00:00:00.000 Started by upstream project "autotest-per-patch" build number 121329 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.156 Fetching changes from the remote Git repository 00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.199 Using shallow fetch with depth 1 00:00:00.199 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.199 > git --version # timeout=10 00:00:00.234 > git --version # 'git version 2.39.2' 00:00:00.234 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.234 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.234 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.565 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.577 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.588 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD) 00:00:04.588 > git config core.sparsecheckout # timeout=10 00:00:04.599 > git read-tree -mu HEAD # timeout=10 00:00:04.616 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5 00:00:04.638 Commit message: "ansible/roles/custom_facts: Drop nvme features" 00:00:04.638 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:04.715 [Pipeline] Start of Pipeline 00:00:04.728 [Pipeline] library 00:00:04.729 Loading library shm_lib@master 00:00:04.730 Library shm_lib@master is cached. Copying from home. 00:00:04.746 [Pipeline] node 00:00:04.759 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.760 [Pipeline] { 00:00:04.769 [Pipeline] catchError 00:00:04.770 [Pipeline] { 00:00:04.783 [Pipeline] wrap 00:00:04.790 [Pipeline] { 00:00:04.795 [Pipeline] stage 00:00:04.797 [Pipeline] { (Prologue) 00:00:04.982 [Pipeline] sh 00:00:05.264 + logger -p user.info -t JENKINS-CI 00:00:05.279 [Pipeline] echo 00:00:05.280 Node: CYP12 00:00:05.287 [Pipeline] sh 00:00:05.586 [Pipeline] setCustomBuildProperty 00:00:05.601 [Pipeline] echo 00:00:05.602 Cleanup processes 00:00:05.607 [Pipeline] sh 00:00:05.890 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.890 57501 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.905 [Pipeline] sh 00:00:06.192 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.192 ++ grep -v 'sudo pgrep' 00:00:06.192 ++ awk '{print $1}' 00:00:06.192 + sudo kill -9 00:00:06.192 + true 00:00:06.207 [Pipeline] cleanWs 00:00:06.216 [WS-CLEANUP] Deleting project workspace... 00:00:06.217 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.223 [WS-CLEANUP] done 00:00:06.227 [Pipeline] setCustomBuildProperty 00:00:06.241 [Pipeline] sh 00:00:06.522 + sudo git config --global --replace-all safe.directory '*' 00:00:06.589 [Pipeline] nodesByLabel 00:00:06.591 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.599 [Pipeline] httpRequest 00:00:06.603 HttpMethod: GET 00:00:06.604 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:06.607 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:06.610 Response Code: HTTP/1.1 200 OK 00:00:06.611 Success: Status code 200 is in the accepted range: 200,404 00:00:06.612 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:07.460 [Pipeline] sh 00:00:07.742 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:07.761 [Pipeline] httpRequest 00:00:07.765 HttpMethod: GET 00:00:07.766 URL: http://10.211.164.96/packages/spdk_f1d799ad0fe1d22327ef95d09e13fcddef47c626.tar.gz 00:00:07.767 Sending request to url: http://10.211.164.96/packages/spdk_f1d799ad0fe1d22327ef95d09e13fcddef47c626.tar.gz 00:00:07.778 Response Code: HTTP/1.1 200 OK 00:00:07.779 Success: Status code 200 is in the accepted range: 200,404 00:00:07.779 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f1d799ad0fe1d22327ef95d09e13fcddef47c626.tar.gz 00:00:39.861 [Pipeline] sh 00:00:40.144 + tar --no-same-owner -xf spdk_f1d799ad0fe1d22327ef95d09e13fcddef47c626.tar.gz 00:00:43.491 [Pipeline] sh 00:00:43.774 + git -C spdk log --oneline -n5 00:00:43.774 f1d799ad0 bdev: use local variable when tallying io histogram 00:00:43.774 e267a0e11 bdev: do not try to track ioch elapsed time in trace 00:00:43.774 756b1ecbb bdev: register and use trace owners 00:00:43.774 b7127eca5 nvmf/tcp: register and use trace owners 00:00:43.774 e12855158 nvmf/tcp: add nvmf_qpair_set_ctrlr helper function 00:00:43.787 [Pipeline] } 00:00:43.804 [Pipeline] // stage 00:00:43.813 [Pipeline] stage 00:00:43.815 [Pipeline] { (Prepare) 00:00:43.835 [Pipeline] writeFile 00:00:43.850 [Pipeline] sh 00:00:44.134 + logger -p user.info -t JENKINS-CI 00:00:44.148 [Pipeline] sh 00:00:44.433 + logger -p user.info -t JENKINS-CI 00:00:44.447 [Pipeline] sh 00:00:44.732 + cat autorun-spdk.conf 00:00:44.732 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.732 SPDK_TEST_NVMF=1 00:00:44.732 SPDK_TEST_NVME_CLI=1 00:00:44.732 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.732 SPDK_TEST_NVMF_NICS=e810 00:00:44.732 SPDK_TEST_VFIOUSER=1 00:00:44.732 SPDK_RUN_UBSAN=1 00:00:44.732 NET_TYPE=phy 00:00:44.740 RUN_NIGHTLY=0 00:00:44.745 [Pipeline] readFile 00:00:44.771 [Pipeline] withEnv 00:00:44.774 [Pipeline] { 00:00:44.788 [Pipeline] sh 00:00:45.077 + set -ex 00:00:45.077 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:45.077 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:45.077 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.077 ++ SPDK_TEST_NVMF=1 00:00:45.077 ++ SPDK_TEST_NVME_CLI=1 00:00:45.077 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.077 ++ SPDK_TEST_NVMF_NICS=e810 00:00:45.077 ++ SPDK_TEST_VFIOUSER=1 00:00:45.077 ++ SPDK_RUN_UBSAN=1 00:00:45.077 ++ NET_TYPE=phy 00:00:45.077 ++ RUN_NIGHTLY=0 00:00:45.077 + case $SPDK_TEST_NVMF_NICS in 00:00:45.077 + DRIVERS=ice 00:00:45.077 + [[ tcp == \r\d\m\a ]] 00:00:45.077 + [[ -n ice ]] 00:00:45.077 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:45.077 rmmod: ERROR: 
Module mlx4_ib is not currently loaded 00:00:45.077 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:45.077 rmmod: ERROR: Module irdma is not currently loaded 00:00:45.077 rmmod: ERROR: Module i40iw is not currently loaded 00:00:45.077 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:45.077 + true 00:00:45.077 + for D in $DRIVERS 00:00:45.077 + sudo modprobe ice 00:00:45.077 + exit 0 00:00:45.087 [Pipeline] } 00:00:45.106 [Pipeline] // withEnv 00:00:45.111 [Pipeline] } 00:00:45.130 [Pipeline] // stage 00:00:45.143 [Pipeline] catchError 00:00:45.145 [Pipeline] { 00:00:45.163 [Pipeline] timeout 00:00:45.163 Timeout set to expire in 40 min 00:00:45.165 [Pipeline] { 00:00:45.181 [Pipeline] stage 00:00:45.184 [Pipeline] { (Tests) 00:00:45.202 [Pipeline] sh 00:00:45.493 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.493 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.493 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.493 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:45.493 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:45.493 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.493 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:45.493 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.493 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.493 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.493 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.493 + source /etc/os-release 00:00:45.493 ++ NAME='Fedora Linux' 00:00:45.493 ++ VERSION='38 (Cloud Edition)' 00:00:45.493 ++ ID=fedora 00:00:45.493 ++ VERSION_ID=38 00:00:45.493 ++ VERSION_CODENAME= 00:00:45.493 ++ PLATFORM_ID=platform:f38 00:00:45.493 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:45.493 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:45.493 ++ LOGO=fedora-logo-icon 00:00:45.493 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:45.493 ++ HOME_URL=https://fedoraproject.org/ 00:00:45.493 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:45.493 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:45.493 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:45.493 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:45.493 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:45.493 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:45.493 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:45.493 ++ SUPPORT_END=2024-05-14 00:00:45.493 ++ VARIANT='Cloud Edition' 00:00:45.493 ++ VARIANT_ID=cloud 00:00:45.493 + uname -a 00:00:45.493 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:45.493 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:48.037 Hugepages 00:00:48.037 node hugesize free / total 00:00:48.037 node0 1048576kB 0 / 0 00:00:48.037 node0 2048kB 0 / 0 00:00:48.037 node1 1048576kB 0 / 0 00:00:48.037 node1 2048kB 0 / 0 00:00:48.037 00:00:48.037 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:48.037 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:48.037 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:48.037 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:48.037 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:48.037 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:48.037 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:48.037 I/OAT 0000:00:01.6 8086 
0b00 0 ioatdma - - 00:00:48.037 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:48.298 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:48.298 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:48.298 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:48.298 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:48.298 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:48.298 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:48.298 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:48.298 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:48.298 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:48.298 + rm -f /tmp/spdk-ld-path 00:00:48.298 + source autorun-spdk.conf 00:00:48.298 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.298 ++ SPDK_TEST_NVMF=1 00:00:48.298 ++ SPDK_TEST_NVME_CLI=1 00:00:48.298 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.298 ++ SPDK_TEST_NVMF_NICS=e810 00:00:48.298 ++ SPDK_TEST_VFIOUSER=1 00:00:48.298 ++ SPDK_RUN_UBSAN=1 00:00:48.298 ++ NET_TYPE=phy 00:00:48.298 ++ RUN_NIGHTLY=0 00:00:48.298 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:48.298 + [[ -n '' ]] 00:00:48.298 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:48.298 + for M in /var/spdk/build-*-manifest.txt 00:00:48.298 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:48.298 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.298 + for M in /var/spdk/build-*-manifest.txt 00:00:48.298 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:48.298 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.298 ++ uname 00:00:48.298 + [[ Linux == \L\i\n\u\x ]] 00:00:48.298 + sudo dmesg -T 00:00:48.298 + sudo dmesg --clear 00:00:48.298 + dmesg_pid=59065 00:00:48.298 + [[ Fedora Linux == FreeBSD ]] 00:00:48.298 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.298 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.298 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:48.298 + [[ -x /usr/src/fio-static/fio ]] 00:00:48.298 + export FIO_BIN=/usr/src/fio-static/fio 00:00:48.298 + FIO_BIN=/usr/src/fio-static/fio 00:00:48.298 + sudo dmesg -Tw 00:00:48.298 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:48.298 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:48.298 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:48.298 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.298 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.298 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:48.298 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.298 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.298 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.298 Test configuration: 00:00:48.298 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.298 SPDK_TEST_NVMF=1 00:00:48.298 SPDK_TEST_NVME_CLI=1 00:00:48.298 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.298 SPDK_TEST_NVMF_NICS=e810 00:00:48.298 SPDK_TEST_VFIOUSER=1 00:00:48.298 SPDK_RUN_UBSAN=1 00:00:48.298 NET_TYPE=phy 00:00:48.559 RUN_NIGHTLY=0 23:45:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:48.559 23:45:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:48.559 23:45:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:48.559 23:45:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:48.559 23:45:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.559 23:45:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.559 23:45:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.559 23:45:18 -- paths/export.sh@5 -- $ export PATH 00:00:48.559 23:45:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.559 23:45:18 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:48.559 23:45:18 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:48.559 23:45:18 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714167918.XXXXXX 00:00:48.559 23:45:18 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714167918.33bVhx 00:00:48.559 23:45:18 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:48.559 23:45:18 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:48.559 23:45:18 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:48.559 23:45:18 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:48.560 23:45:18 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:48.560 23:45:18 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:48.560 23:45:18 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:48.560 23:45:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:48.560 23:45:18 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:48.560 23:45:18 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:48.560 23:45:18 -- pm/common@17 -- $ local monitor 00:00:48.560 23:45:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.560 23:45:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=59099 00:00:48.560 23:45:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.560 23:45:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=59101 00:00:48.560 23:45:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.560 23:45:18 -- pm/common@21 -- $ date +%s 00:00:48.560 23:45:18 -- pm/common@21 -- $ date +%s 00:00:48.560 23:45:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=59103 00:00:48.560 23:45:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.560 23:45:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=59107 00:00:48.560 23:45:18 -- pm/common@26 -- $ sleep 1 00:00:48.560 23:45:18 -- pm/common@21 -- $ date +%s 00:00:48.560 23:45:18 -- pm/common@21 -- $ date +%s 00:00:48.560 23:45:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714167918 00:00:48.560 23:45:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714167918 00:00:48.560 23:45:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714167918 00:00:48.560 23:45:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714167918 00:00:48.560 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714167918_collect-vmstat.pm.log 00:00:48.560 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714167918_collect-bmc-pm.bmc.pm.log 00:00:48.560 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714167918_collect-cpu-load.pm.log 00:00:48.560 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714167918_collect-cpu-temp.pm.log 00:00:49.501 23:45:19 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:49.501 23:45:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:49.501 23:45:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:49.501 23:45:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.501 23:45:19 -- spdk/autobuild.sh@16 -- $ date -u 00:00:49.501 Fri Apr 26 09:45:19 PM UTC 2024 00:00:49.501 23:45:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:49.501 v24.05-pre-459-gf1d799ad0 00:00:49.501 23:45:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:49.501 23:45:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:49.501 23:45:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:49.501 23:45:19 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:49.501 23:45:19 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:49.501 23:45:19 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.762 ************************************ 00:00:49.762 START TEST ubsan 00:00:49.762 ************************************ 00:00:49.762 23:45:19 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:49.762 using ubsan 00:00:49.762 00:00:49.762 real 0m0.001s 00:00:49.762 user 0m0.000s 00:00:49.762 sys 0m0.000s 00:00:49.762 23:45:19 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:49.762 23:45:19 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.762 ************************************ 00:00:49.762 END TEST ubsan 00:00:49.762 ************************************ 00:00:49.762 23:45:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:49.762 23:45:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:49.762 23:45:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:49.762 23:45:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:49.762 23:45:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:49.762 23:45:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:49.762 23:45:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:49.762 23:45:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:49.762 23:45:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:50.023 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:50.023 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:50.284 Using 'verbs' RDMA provider 00:01:05.764 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:18.030 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:18.030 Creating mk/config.mk...done. 00:01:18.030 Creating mk/cc.flags.mk...done. 00:01:18.030 Type 'make' to build. 
00:01:18.030 23:45:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:18.030 23:45:47 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:18.030 23:45:47 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:18.030 23:45:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.030 ************************************ 00:01:18.030 START TEST make 00:01:18.030 ************************************ 00:01:18.030 23:45:47 -- common/autotest_common.sh@1111 -- $ make -j144 00:01:18.030 make[1]: Nothing to be done for 'all'. 00:01:19.078 The Meson build system 00:01:19.078 Version: 1.3.1 00:01:19.078 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:19.078 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:19.078 Build type: native build 00:01:19.078 Project name: libvfio-user 00:01:19.078 Project version: 0.0.1 00:01:19.078 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:19.078 C linker for the host machine: cc ld.bfd 2.39-16 00:01:19.078 Host machine cpu family: x86_64 00:01:19.078 Host machine cpu: x86_64 00:01:19.078 Run-time dependency threads found: YES 00:01:19.078 Library dl found: YES 00:01:19.078 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:19.078 Run-time dependency json-c found: YES 0.17 00:01:19.078 Run-time dependency cmocka found: YES 1.1.7 00:01:19.078 Program pytest-3 found: NO 00:01:19.078 Program flake8 found: NO 00:01:19.078 Program misspell-fixer found: NO 00:01:19.078 Program restructuredtext-lint found: NO 00:01:19.078 Program valgrind found: YES (/usr/bin/valgrind) 00:01:19.078 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:19.078 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:19.078 Compiler for C supports arguments -Wwrite-strings: YES 00:01:19.078 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:19.078 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:19.078 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:19.078 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:19.078 Build targets in project: 8 00:01:19.078 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:19.078 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:19.078 00:01:19.078 libvfio-user 0.0.1 00:01:19.078 00:01:19.078 User defined options 00:01:19.078 buildtype : debug 00:01:19.078 default_library: shared 00:01:19.078 libdir : /usr/local/lib 00:01:19.078 00:01:19.078 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:19.078 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:19.336 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:19.336 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:19.336 [3/37] Compiling C object samples/null.p/null.c.o 00:01:19.336 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:19.336 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:19.336 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:19.336 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:19.336 [8/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:19.336 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:19.336 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:19.336 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:19.336 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:19.336 [13/37] Compiling C object samples/server.p/server.c.o 00:01:19.336 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:19.336 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:19.336 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:19.336 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:19.336 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:19.336 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:19.336 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:19.336 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:19.337 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:19.337 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:19.337 [24/37] Compiling C object samples/client.p/client.c.o 00:01:19.337 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:19.337 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:19.337 [27/37] Linking target samples/client 00:01:19.337 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:19.337 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:19.337 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:19.595 [31/37] Linking target test/unit_tests 00:01:19.595 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:19.595 [33/37] Linking target samples/null 00:01:19.595 [34/37] Linking target samples/gpio-pci-idio-16 00:01:19.595 [35/37] Linking target samples/server 00:01:19.595 [36/37] Linking target samples/lspci 00:01:19.595 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:19.595 INFO: autodetecting backend as ninja 00:01:19.595 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:19.595 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:20.043 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:20.043 ninja: no work to do. 00:01:26.630 The Meson build system 00:01:26.630 Version: 1.3.1 00:01:26.630 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:26.631 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:26.631 Build type: native build 00:01:26.631 Program cat found: YES (/usr/bin/cat) 00:01:26.631 Project name: DPDK 00:01:26.631 Project version: 23.11.0 00:01:26.631 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:26.631 C linker for the host machine: cc ld.bfd 2.39-16 00:01:26.631 Host machine cpu family: x86_64 00:01:26.631 Host machine cpu: x86_64 00:01:26.631 Message: ## Building in Developer Mode ## 00:01:26.631 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:26.631 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:26.631 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:26.631 Program python3 found: YES (/usr/bin/python3) 00:01:26.631 Program cat found: YES (/usr/bin/cat) 00:01:26.631 Compiler for C supports arguments -march=native: YES 00:01:26.631 Checking for size of "void *" : 8 00:01:26.631 Checking for size of "void *" : 8 (cached) 00:01:26.631 Library m found: YES 00:01:26.631 Library numa found: YES 00:01:26.631 Has header "numaif.h" : YES 00:01:26.631 Library fdt found: NO 00:01:26.631 Library execinfo found: NO 00:01:26.631 Has header "execinfo.h" : YES 00:01:26.631 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:26.631 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:26.631 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:26.631 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:26.631 Run-time dependency openssl found: YES 3.0.9 00:01:26.631 Run-time dependency libpcap found: YES 1.10.4 00:01:26.631 Has header "pcap.h" with dependency libpcap: YES 00:01:26.631 Compiler for C supports arguments -Wcast-qual: YES 00:01:26.631 Compiler for C supports arguments -Wdeprecated: YES 00:01:26.631 Compiler for C supports arguments -Wformat: YES 00:01:26.631 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:26.631 Compiler for C supports arguments -Wformat-security: NO 00:01:26.631 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:26.631 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:26.631 Compiler for C supports arguments -Wnested-externs: YES 00:01:26.631 Compiler for C supports arguments -Wold-style-definition: YES 00:01:26.631 Compiler for C supports arguments -Wpointer-arith: YES 00:01:26.631 Compiler for C supports arguments -Wsign-compare: YES 00:01:26.631 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:26.631 Compiler for C supports arguments -Wundef: YES 00:01:26.631 Compiler for C supports arguments -Wwrite-strings: YES 00:01:26.631 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:26.631 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:26.631 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:26.631 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:26.631 Program objdump found: YES (/usr/bin/objdump) 00:01:26.631 Compiler for C supports arguments -mavx512f: YES 00:01:26.631 Checking if "AVX512 checking" compiles: YES 00:01:26.631 Fetching value of define "__SSE4_2__" : 1 00:01:26.631 Fetching value of define "__AES__" : 1 00:01:26.631 Fetching value of define "__AVX__" : 1 00:01:26.631 Fetching value of define "__AVX2__" : 1 00:01:26.631 Fetching value of define "__AVX512BW__" : 1 00:01:26.631 Fetching value of define "__AVX512CD__" : 1 00:01:26.631 Fetching value of define "__AVX512DQ__" : 1 00:01:26.631 Fetching value of define "__AVX512F__" : 1 00:01:26.631 Fetching value of define "__AVX512VL__" : 1 00:01:26.631 Fetching value of define "__PCLMUL__" : 1 00:01:26.631 Fetching value of define "__RDRND__" : 1 00:01:26.631 Fetching value of define "__RDSEED__" : 1 00:01:26.631 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:26.631 Fetching value of define "__znver1__" : (undefined) 00:01:26.631 Fetching value of define "__znver2__" : (undefined) 00:01:26.631 Fetching value of define "__znver3__" : (undefined) 00:01:26.631 Fetching value of define "__znver4__" : (undefined) 00:01:26.631 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:26.631 Message: lib/log: Defining dependency "log" 00:01:26.631 Message: lib/kvargs: Defining dependency "kvargs" 00:01:26.631 Message: lib/telemetry: Defining dependency "telemetry" 00:01:26.631 Checking for function "getentropy" : NO 00:01:26.631 Message: lib/eal: Defining dependency "eal" 00:01:26.631 Message: lib/ring: Defining dependency "ring" 00:01:26.631 Message: lib/rcu: Defining dependency "rcu" 00:01:26.631 Message: lib/mempool: Defining dependency "mempool" 00:01:26.631 Message: lib/mbuf: Defining dependency "mbuf" 00:01:26.631 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:26.631 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:26.631 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:26.631 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:26.631 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:26.631 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:26.631 Compiler for C supports arguments -mpclmul: YES 00:01:26.631 Compiler for C supports arguments -maes: YES 00:01:26.631 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:26.631 Compiler for C supports arguments -mavx512bw: YES 00:01:26.631 Compiler for C supports arguments -mavx512dq: YES 00:01:26.631 Compiler for C supports arguments -mavx512vl: YES 00:01:26.631 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:26.631 Compiler for C supports arguments -mavx2: YES 00:01:26.631 Compiler for C supports arguments -mavx: YES 00:01:26.631 Message: lib/net: Defining dependency "net" 00:01:26.631 Message: lib/meter: Defining dependency "meter" 00:01:26.631 Message: lib/ethdev: Defining dependency "ethdev" 00:01:26.631 Message: lib/pci: Defining dependency "pci" 00:01:26.631 Message: lib/cmdline: Defining dependency "cmdline" 00:01:26.631 Message: lib/hash: Defining dependency "hash" 00:01:26.631 Message: lib/timer: Defining dependency "timer" 00:01:26.631 Message: lib/compressdev: Defining dependency "compressdev" 00:01:26.631 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:26.631 Message: lib/dmadev: Defining dependency "dmadev" 00:01:26.631 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:26.631 
Message: lib/power: Defining dependency "power" 00:01:26.631 Message: lib/reorder: Defining dependency "reorder" 00:01:26.631 Message: lib/security: Defining dependency "security" 00:01:26.631 Has header "linux/userfaultfd.h" : YES 00:01:26.631 Has header "linux/vduse.h" : YES 00:01:26.631 Message: lib/vhost: Defining dependency "vhost" 00:01:26.631 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:26.631 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:26.631 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:26.631 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:26.631 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:26.631 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:26.631 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:26.631 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:26.631 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:26.631 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:26.631 Program doxygen found: YES (/usr/bin/doxygen) 00:01:26.631 Configuring doxy-api-html.conf using configuration 00:01:26.631 Configuring doxy-api-man.conf using configuration 00:01:26.631 Program mandb found: YES (/usr/bin/mandb) 00:01:26.631 Program sphinx-build found: NO 00:01:26.631 Configuring rte_build_config.h using configuration 00:01:26.631 Message: 00:01:26.631 ================= 00:01:26.631 Applications Enabled 00:01:26.631 ================= 00:01:26.631 00:01:26.631 apps: 00:01:26.631 00:01:26.631 00:01:26.631 Message: 00:01:26.631 ================= 00:01:26.631 Libraries Enabled 00:01:26.631 ================= 00:01:26.631 00:01:26.631 libs: 00:01:26.631 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:26.631 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:26.631 cryptodev, dmadev, power, reorder, security, vhost, 00:01:26.631 00:01:26.631 Message: 00:01:26.631 =============== 00:01:26.631 Drivers Enabled 00:01:26.631 =============== 00:01:26.631 00:01:26.631 common: 00:01:26.631 00:01:26.631 bus: 00:01:26.631 pci, vdev, 00:01:26.631 mempool: 00:01:26.631 ring, 00:01:26.631 dma: 00:01:26.631 00:01:26.631 net: 00:01:26.631 00:01:26.631 crypto: 00:01:26.631 00:01:26.631 compress: 00:01:26.631 00:01:26.631 vdpa: 00:01:26.631 00:01:26.631 00:01:26.631 Message: 00:01:26.631 ================= 00:01:26.631 Content Skipped 00:01:26.631 ================= 00:01:26.631 00:01:26.631 apps: 00:01:26.631 dumpcap: explicitly disabled via build config 00:01:26.631 graph: explicitly disabled via build config 00:01:26.631 pdump: explicitly disabled via build config 00:01:26.631 proc-info: explicitly disabled via build config 00:01:26.631 test-acl: explicitly disabled via build config 00:01:26.631 test-bbdev: explicitly disabled via build config 00:01:26.631 test-cmdline: explicitly disabled via build config 00:01:26.631 test-compress-perf: explicitly disabled via build config 00:01:26.631 test-crypto-perf: explicitly disabled via build config 00:01:26.631 test-dma-perf: explicitly disabled via build config 00:01:26.631 test-eventdev: explicitly disabled via build config 00:01:26.631 test-fib: explicitly disabled via build config 00:01:26.631 test-flow-perf: explicitly disabled via build config 00:01:26.631 test-gpudev: explicitly disabled via build config 00:01:26.631 test-mldev: explicitly disabled via build config 
00:01:26.631 test-pipeline: explicitly disabled via build config 00:01:26.631 test-pmd: explicitly disabled via build config 00:01:26.631 test-regex: explicitly disabled via build config 00:01:26.631 test-sad: explicitly disabled via build config 00:01:26.631 test-security-perf: explicitly disabled via build config 00:01:26.631 00:01:26.631 libs: 00:01:26.631 metrics: explicitly disabled via build config 00:01:26.631 acl: explicitly disabled via build config 00:01:26.631 bbdev: explicitly disabled via build config 00:01:26.631 bitratestats: explicitly disabled via build config 00:01:26.632 bpf: explicitly disabled via build config 00:01:26.632 cfgfile: explicitly disabled via build config 00:01:26.632 distributor: explicitly disabled via build config 00:01:26.632 efd: explicitly disabled via build config 00:01:26.632 eventdev: explicitly disabled via build config 00:01:26.632 dispatcher: explicitly disabled via build config 00:01:26.632 gpudev: explicitly disabled via build config 00:01:26.632 gro: explicitly disabled via build config 00:01:26.632 gso: explicitly disabled via build config 00:01:26.632 ip_frag: explicitly disabled via build config 00:01:26.632 jobstats: explicitly disabled via build config 00:01:26.632 latencystats: explicitly disabled via build config 00:01:26.632 lpm: explicitly disabled via build config 00:01:26.632 member: explicitly disabled via build config 00:01:26.632 pcapng: explicitly disabled via build config 00:01:26.632 rawdev: explicitly disabled via build config 00:01:26.632 regexdev: explicitly disabled via build config 00:01:26.632 mldev: explicitly disabled via build config 00:01:26.632 rib: explicitly disabled via build config 00:01:26.632 sched: explicitly disabled via build config 00:01:26.632 stack: explicitly disabled via build config 00:01:26.632 ipsec: explicitly disabled via build config 00:01:26.632 pdcp: explicitly disabled via build config 00:01:26.632 fib: explicitly disabled via build config 00:01:26.632 port: explicitly disabled via build config 00:01:26.632 pdump: explicitly disabled via build config 00:01:26.632 table: explicitly disabled via build config 00:01:26.632 pipeline: explicitly disabled via build config 00:01:26.632 graph: explicitly disabled via build config 00:01:26.632 node: explicitly disabled via build config 00:01:26.632 00:01:26.632 drivers: 00:01:26.632 common/cpt: not in enabled drivers build config 00:01:26.632 common/dpaax: not in enabled drivers build config 00:01:26.632 common/iavf: not in enabled drivers build config 00:01:26.632 common/idpf: not in enabled drivers build config 00:01:26.632 common/mvep: not in enabled drivers build config 00:01:26.632 common/octeontx: not in enabled drivers build config 00:01:26.632 bus/auxiliary: not in enabled drivers build config 00:01:26.632 bus/cdx: not in enabled drivers build config 00:01:26.632 bus/dpaa: not in enabled drivers build config 00:01:26.632 bus/fslmc: not in enabled drivers build config 00:01:26.632 bus/ifpga: not in enabled drivers build config 00:01:26.632 bus/platform: not in enabled drivers build config 00:01:26.632 bus/vmbus: not in enabled drivers build config 00:01:26.632 common/cnxk: not in enabled drivers build config 00:01:26.632 common/mlx5: not in enabled drivers build config 00:01:26.632 common/nfp: not in enabled drivers build config 00:01:26.632 common/qat: not in enabled drivers build config 00:01:26.632 common/sfc_efx: not in enabled drivers build config 00:01:26.632 mempool/bucket: not in enabled drivers build config 00:01:26.632 mempool/cnxk: 
not in enabled drivers build config 00:01:26.632 mempool/dpaa: not in enabled drivers build config 00:01:26.632 mempool/dpaa2: not in enabled drivers build config 00:01:26.632 mempool/octeontx: not in enabled drivers build config 00:01:26.632 mempool/stack: not in enabled drivers build config 00:01:26.632 dma/cnxk: not in enabled drivers build config 00:01:26.632 dma/dpaa: not in enabled drivers build config 00:01:26.632 dma/dpaa2: not in enabled drivers build config 00:01:26.632 dma/hisilicon: not in enabled drivers build config 00:01:26.632 dma/idxd: not in enabled drivers build config 00:01:26.632 dma/ioat: not in enabled drivers build config 00:01:26.632 dma/skeleton: not in enabled drivers build config 00:01:26.632 net/af_packet: not in enabled drivers build config 00:01:26.632 net/af_xdp: not in enabled drivers build config 00:01:26.632 net/ark: not in enabled drivers build config 00:01:26.632 net/atlantic: not in enabled drivers build config 00:01:26.632 net/avp: not in enabled drivers build config 00:01:26.632 net/axgbe: not in enabled drivers build config 00:01:26.632 net/bnx2x: not in enabled drivers build config 00:01:26.632 net/bnxt: not in enabled drivers build config 00:01:26.632 net/bonding: not in enabled drivers build config 00:01:26.632 net/cnxk: not in enabled drivers build config 00:01:26.632 net/cpfl: not in enabled drivers build config 00:01:26.632 net/cxgbe: not in enabled drivers build config 00:01:26.632 net/dpaa: not in enabled drivers build config 00:01:26.632 net/dpaa2: not in enabled drivers build config 00:01:26.632 net/e1000: not in enabled drivers build config 00:01:26.632 net/ena: not in enabled drivers build config 00:01:26.632 net/enetc: not in enabled drivers build config 00:01:26.632 net/enetfec: not in enabled drivers build config 00:01:26.632 net/enic: not in enabled drivers build config 00:01:26.632 net/failsafe: not in enabled drivers build config 00:01:26.632 net/fm10k: not in enabled drivers build config 00:01:26.632 net/gve: not in enabled drivers build config 00:01:26.632 net/hinic: not in enabled drivers build config 00:01:26.632 net/hns3: not in enabled drivers build config 00:01:26.632 net/i40e: not in enabled drivers build config 00:01:26.632 net/iavf: not in enabled drivers build config 00:01:26.632 net/ice: not in enabled drivers build config 00:01:26.632 net/idpf: not in enabled drivers build config 00:01:26.632 net/igc: not in enabled drivers build config 00:01:26.632 net/ionic: not in enabled drivers build config 00:01:26.632 net/ipn3ke: not in enabled drivers build config 00:01:26.632 net/ixgbe: not in enabled drivers build config 00:01:26.632 net/mana: not in enabled drivers build config 00:01:26.632 net/memif: not in enabled drivers build config 00:01:26.632 net/mlx4: not in enabled drivers build config 00:01:26.632 net/mlx5: not in enabled drivers build config 00:01:26.632 net/mvneta: not in enabled drivers build config 00:01:26.632 net/mvpp2: not in enabled drivers build config 00:01:26.632 net/netvsc: not in enabled drivers build config 00:01:26.632 net/nfb: not in enabled drivers build config 00:01:26.632 net/nfp: not in enabled drivers build config 00:01:26.632 net/ngbe: not in enabled drivers build config 00:01:26.632 net/null: not in enabled drivers build config 00:01:26.632 net/octeontx: not in enabled drivers build config 00:01:26.632 net/octeon_ep: not in enabled drivers build config 00:01:26.632 net/pcap: not in enabled drivers build config 00:01:26.632 net/pfe: not in enabled drivers build config 00:01:26.632 net/qede: 
not in enabled drivers build config 00:01:26.632 net/ring: not in enabled drivers build config 00:01:26.632 net/sfc: not in enabled drivers build config 00:01:26.632 net/softnic: not in enabled drivers build config 00:01:26.632 net/tap: not in enabled drivers build config 00:01:26.632 net/thunderx: not in enabled drivers build config 00:01:26.632 net/txgbe: not in enabled drivers build config 00:01:26.632 net/vdev_netvsc: not in enabled drivers build config 00:01:26.632 net/vhost: not in enabled drivers build config 00:01:26.632 net/virtio: not in enabled drivers build config 00:01:26.632 net/vmxnet3: not in enabled drivers build config 00:01:26.632 raw/*: missing internal dependency, "rawdev" 00:01:26.632 crypto/armv8: not in enabled drivers build config 00:01:26.632 crypto/bcmfs: not in enabled drivers build config 00:01:26.632 crypto/caam_jr: not in enabled drivers build config 00:01:26.632 crypto/ccp: not in enabled drivers build config 00:01:26.632 crypto/cnxk: not in enabled drivers build config 00:01:26.632 crypto/dpaa_sec: not in enabled drivers build config 00:01:26.632 crypto/dpaa2_sec: not in enabled drivers build config 00:01:26.632 crypto/ipsec_mb: not in enabled drivers build config 00:01:26.632 crypto/mlx5: not in enabled drivers build config 00:01:26.632 crypto/mvsam: not in enabled drivers build config 00:01:26.632 crypto/nitrox: not in enabled drivers build config 00:01:26.632 crypto/null: not in enabled drivers build config 00:01:26.632 crypto/octeontx: not in enabled drivers build config 00:01:26.632 crypto/openssl: not in enabled drivers build config 00:01:26.632 crypto/scheduler: not in enabled drivers build config 00:01:26.632 crypto/uadk: not in enabled drivers build config 00:01:26.632 crypto/virtio: not in enabled drivers build config 00:01:26.632 compress/isal: not in enabled drivers build config 00:01:26.632 compress/mlx5: not in enabled drivers build config 00:01:26.632 compress/octeontx: not in enabled drivers build config 00:01:26.632 compress/zlib: not in enabled drivers build config 00:01:26.632 regex/*: missing internal dependency, "regexdev" 00:01:26.632 ml/*: missing internal dependency, "mldev" 00:01:26.632 vdpa/ifc: not in enabled drivers build config 00:01:26.632 vdpa/mlx5: not in enabled drivers build config 00:01:26.632 vdpa/nfp: not in enabled drivers build config 00:01:26.632 vdpa/sfc: not in enabled drivers build config 00:01:26.632 event/*: missing internal dependency, "eventdev" 00:01:26.632 baseband/*: missing internal dependency, "bbdev" 00:01:26.632 gpu/*: missing internal dependency, "gpudev" 00:01:26.632 00:01:26.632 00:01:26.632 Build targets in project: 84 00:01:26.632 00:01:26.632 DPDK 23.11.0 00:01:26.632 00:01:26.632 User defined options 00:01:26.632 buildtype : debug 00:01:26.632 default_library : shared 00:01:26.632 libdir : lib 00:01:26.632 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:26.632 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:26.632 c_link_args : 00:01:26.632 cpu_instruction_set: native 00:01:26.632 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:26.632 disable_libs : 
pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:26.632 enable_docs : false 00:01:26.632 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:26.632 enable_kmods : false 00:01:26.632 tests : false 00:01:26.632 00:01:26.632 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:26.632 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:26.632 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:26.632 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:26.632 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:26.633 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:26.633 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:26.633 [6/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:26.633 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:26.633 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:26.633 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:26.633 [10/264] Linking static target lib/librte_kvargs.a 00:01:26.633 [11/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:26.633 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:26.633 [13/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:26.633 [14/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:26.633 [15/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:26.633 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:26.633 [17/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:26.633 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:26.633 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:26.633 [20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:26.633 [21/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:26.633 [22/264] Linking static target lib/librte_log.a 00:01:26.633 [23/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:26.633 [24/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:26.633 [25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:26.633 [26/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:26.633 [27/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:26.633 [28/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:26.633 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:26.633 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:26.633 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:26.633 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:26.891 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:26.891 [34/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 
00:01:26.891 [35/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:26.891 [36/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:26.891 [37/264] Linking static target lib/librte_pci.a 00:01:26.891 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:26.891 [39/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:26.891 [40/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:26.891 [41/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:26.891 [42/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:26.891 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:26.891 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:26.891 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:26.891 [46/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:26.891 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:26.891 [48/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.891 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:27.151 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:27.151 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:27.151 [52/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:27.151 [53/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:27.151 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:27.151 [55/264] Linking static target lib/librte_meter.a 00:01:27.151 [56/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.151 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:27.151 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:27.151 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:27.151 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:27.151 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:27.151 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:27.151 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:27.151 [64/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:27.151 [65/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:27.151 [66/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:27.151 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:27.151 [68/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:27.151 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:27.151 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:27.151 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:27.151 [72/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:27.151 [73/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:27.151 [74/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:27.151 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:27.151 [76/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:27.151 [77/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:27.151 [78/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:27.151 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:27.151 [80/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:27.151 [81/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:27.151 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:27.151 [83/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:27.151 [84/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:27.151 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:27.151 [86/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:27.151 [87/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:27.151 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:27.151 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:27.151 [90/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:27.151 [91/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:27.151 [92/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:27.151 [93/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:27.151 [94/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:27.151 [95/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:27.151 [96/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:27.151 [97/264] Linking static target lib/librte_dmadev.a 00:01:27.151 [98/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:27.151 [99/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:27.151 [100/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:27.151 [101/264] Linking static target lib/librte_ring.a 00:01:27.151 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:27.151 [103/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:27.151 [104/264] Linking static target lib/librte_telemetry.a 00:01:27.151 [105/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:27.151 [106/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:27.151 [107/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:27.151 [108/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:27.151 [109/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:27.151 [110/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:27.151 [111/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:27.151 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:27.151 [113/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:27.151 [114/264] Linking static target lib/librte_timer.a 00:01:27.151 [115/264] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:27.151 [116/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:27.151 [117/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:27.151 [118/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:27.151 [119/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:27.151 [120/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:27.151 [121/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:27.151 [122/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:27.151 [123/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:27.151 [124/264] Linking static target lib/librte_mempool.a 00:01:27.151 [125/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:27.151 [126/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:27.151 [127/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:27.151 [128/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:27.151 [129/264] Linking static target lib/librte_cmdline.a 00:01:27.151 [130/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:27.151 [131/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:27.151 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:27.151 [133/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:27.151 [134/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:27.151 [135/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:27.151 [136/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:27.151 [137/264] Linking static target lib/librte_rcu.a 00:01:27.151 [138/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:27.412 [139/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:27.412 [140/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.412 [141/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:27.412 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:27.412 [143/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:27.412 [144/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:27.412 [145/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:27.412 [146/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:27.412 [147/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.412 [148/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:27.412 [149/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:27.412 [150/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:27.412 [151/264] Linking static target lib/librte_power.a 00:01:27.412 [152/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:27.412 [153/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:27.412 [154/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:27.412 [155/264] Compiling 
C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:27.412 [156/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:27.412 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:27.412 [158/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:27.412 [159/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:27.412 [160/264] Linking static target lib/librte_reorder.a 00:01:27.412 [161/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:27.412 [162/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:27.412 [163/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:27.412 [164/264] Linking target lib/librte_log.so.24.0 00:01:27.412 [165/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:27.412 [166/264] Linking static target lib/librte_eal.a 00:01:27.412 [167/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:27.412 [168/264] Linking static target lib/librte_compressdev.a 00:01:27.412 [169/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:27.412 [170/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:27.412 [171/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:27.412 [172/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:27.412 [173/264] Linking static target lib/librte_net.a 00:01:27.412 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:27.412 [175/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:27.412 [176/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:27.412 [177/264] Linking static target lib/librte_security.a 00:01:27.412 [178/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:27.412 [179/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:27.412 [180/264] Linking static target lib/librte_mbuf.a 00:01:27.412 [181/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:27.412 [182/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:27.412 [183/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:27.412 [184/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.412 [185/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:27.412 [186/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.412 [187/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.412 [188/264] Linking static target drivers/librte_bus_vdev.a 00:01:27.412 [189/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.412 [190/264] Linking target lib/librte_kvargs.so.24.0 00:01:27.412 [191/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.675 [192/264] Linking static target drivers/librte_bus_pci.a 00:01:27.675 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:27.675 [194/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:27.675 [195/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:27.675 [196/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.675 
[197/264] Linking static target lib/librte_hash.a 00:01:27.675 [198/264] Linking static target drivers/librte_mempool_ring.a 00:01:27.675 [199/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.675 [200/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.675 [201/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.675 [202/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.675 [203/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:27.675 [204/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:27.675 [205/264] Linking static target lib/librte_cryptodev.a 00:01:27.675 [206/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:27.675 [207/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.675 [208/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.675 [209/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:27.675 [210/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.935 [211/264] Linking static target lib/librte_ethdev.a 00:01:27.935 [212/264] Linking target lib/librte_telemetry.so.24.0 00:01:27.935 [213/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.935 [214/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:27.935 [215/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.195 [216/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.195 [217/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:28.195 [218/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.195 [219/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.195 [220/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.456 [221/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.456 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.456 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.398 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:29.398 [225/264] Linking static target lib/librte_vhost.a 00:01:29.971 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.354 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.642 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.188 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.188 [230/264] Linking target lib/librte_eal.so.24.0 00:01:39.188 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:39.188 [232/264] Linking target lib/librte_pci.so.24.0 00:01:39.188 [233/264] 
Linking target lib/librte_ring.so.24.0 00:01:39.188 [234/264] Linking target lib/librte_timer.so.24.0 00:01:39.188 [235/264] Linking target lib/librte_meter.so.24.0 00:01:39.188 [236/264] Linking target lib/librte_dmadev.so.24.0 00:01:39.188 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:39.450 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:39.450 [239/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:39.450 [240/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:39.450 [241/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:39.450 [242/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:39.450 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:39.450 [244/264] Linking target lib/librte_rcu.so.24.0 00:01:39.450 [245/264] Linking target lib/librte_mempool.so.24.0 00:01:39.450 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:39.450 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:39.711 [248/264] Linking target lib/librte_mbuf.so.24.0 00:01:39.711 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:39.712 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:39.712 [251/264] Linking target lib/librte_cryptodev.so.24.0 00:01:39.712 [252/264] Linking target lib/librte_net.so.24.0 00:01:39.712 [253/264] Linking target lib/librte_compressdev.so.24.0 00:01:39.712 [254/264] Linking target lib/librte_reorder.so.24.0 00:01:39.972 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:39.972 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:39.972 [257/264] Linking target lib/librte_security.so.24.0 00:01:39.972 [258/264] Linking target lib/librte_hash.so.24.0 00:01:39.972 [259/264] Linking target lib/librte_cmdline.so.24.0 00:01:39.972 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:40.234 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:40.234 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:40.234 [263/264] Linking target lib/librte_power.so.24.0 00:01:40.234 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:40.234 INFO: autodetecting backend as ninja 00:01:40.234 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:41.624 CC lib/ut_mock/mock.o 00:01:41.624 CC lib/log/log.o 00:01:41.624 CC lib/log/log_flags.o 00:01:41.624 CC lib/log/log_deprecated.o 00:01:41.624 CC lib/ut/ut.o 00:01:41.624 LIB libspdk_ut_mock.a 00:01:41.624 SO libspdk_ut_mock.so.6.0 00:01:41.624 LIB libspdk_log.a 00:01:41.624 LIB libspdk_ut.a 00:01:41.624 SO libspdk_log.so.7.0 00:01:41.624 SO libspdk_ut.so.2.0 00:01:41.624 SYMLINK libspdk_ut_mock.so 00:01:41.624 SYMLINK libspdk_log.so 00:01:41.624 SYMLINK libspdk_ut.so 00:01:41.885 CXX lib/trace_parser/trace.o 00:01:41.885 CC lib/dma/dma.o 00:01:41.885 CC lib/util/base64.o 00:01:41.885 CC lib/ioat/ioat.o 00:01:41.885 CC lib/util/bit_array.o 00:01:41.885 CC lib/util/cpuset.o 00:01:41.885 CC lib/util/crc16.o 00:01:41.885 CC lib/util/crc32.o 00:01:41.885 CC lib/util/crc32c.o 00:01:41.885 CC lib/util/crc32_ieee.o 00:01:41.885 CC 
lib/util/crc64.o 00:01:41.885 CC lib/util/dif.o 00:01:41.885 CC lib/util/fd.o 00:01:41.885 CC lib/util/file.o 00:01:41.885 CC lib/util/hexlify.o 00:01:41.885 CC lib/util/iov.o 00:01:41.885 CC lib/util/math.o 00:01:41.885 CC lib/util/pipe.o 00:01:41.885 CC lib/util/strerror_tls.o 00:01:41.885 CC lib/util/string.o 00:01:41.885 CC lib/util/uuid.o 00:01:41.885 CC lib/util/fd_group.o 00:01:41.885 CC lib/util/xor.o 00:01:41.885 CC lib/util/zipf.o 00:01:42.147 CC lib/vfio_user/host/vfio_user_pci.o 00:01:42.147 CC lib/vfio_user/host/vfio_user.o 00:01:42.147 LIB libspdk_dma.a 00:01:42.147 SO libspdk_dma.so.4.0 00:01:42.147 LIB libspdk_ioat.a 00:01:42.408 SYMLINK libspdk_dma.so 00:01:42.408 SO libspdk_ioat.so.7.0 00:01:42.408 SYMLINK libspdk_ioat.so 00:01:42.408 LIB libspdk_vfio_user.a 00:01:42.408 SO libspdk_vfio_user.so.5.0 00:01:42.408 LIB libspdk_util.a 00:01:42.408 SYMLINK libspdk_vfio_user.so 00:01:42.670 SO libspdk_util.so.9.0 00:01:42.670 SYMLINK libspdk_util.so 00:01:42.670 LIB libspdk_trace_parser.a 00:01:42.670 SO libspdk_trace_parser.so.5.0 00:01:42.930 SYMLINK libspdk_trace_parser.so 00:01:42.930 CC lib/vmd/vmd.o 00:01:42.930 CC lib/vmd/led.o 00:01:42.930 CC lib/conf/conf.o 00:01:42.930 CC lib/env_dpdk/env.o 00:01:42.930 CC lib/env_dpdk/memory.o 00:01:42.930 CC lib/env_dpdk/pci.o 00:01:42.930 CC lib/env_dpdk/init.o 00:01:42.930 CC lib/env_dpdk/pci_ioat.o 00:01:42.930 CC lib/env_dpdk/threads.o 00:01:42.930 CC lib/env_dpdk/pci_virtio.o 00:01:42.930 CC lib/env_dpdk/pci_vmd.o 00:01:42.930 CC lib/env_dpdk/pci_idxd.o 00:01:42.930 CC lib/json/json_parse.o 00:01:42.930 CC lib/env_dpdk/pci_event.o 00:01:42.930 CC lib/json/json_util.o 00:01:42.930 CC lib/env_dpdk/sigbus_handler.o 00:01:43.190 CC lib/json/json_write.o 00:01:43.190 CC lib/rdma/common.o 00:01:43.190 CC lib/env_dpdk/pci_dpdk.o 00:01:43.190 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:43.190 CC lib/rdma/rdma_verbs.o 00:01:43.190 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:43.190 CC lib/idxd/idxd.o 00:01:43.190 CC lib/idxd/idxd_user.o 00:01:43.190 LIB libspdk_conf.a 00:01:43.450 SO libspdk_conf.so.6.0 00:01:43.450 LIB libspdk_rdma.a 00:01:43.450 LIB libspdk_json.a 00:01:43.450 SO libspdk_rdma.so.6.0 00:01:43.450 SO libspdk_json.so.6.0 00:01:43.450 SYMLINK libspdk_conf.so 00:01:43.450 SYMLINK libspdk_rdma.so 00:01:43.450 SYMLINK libspdk_json.so 00:01:43.450 LIB libspdk_idxd.a 00:01:43.450 SO libspdk_idxd.so.12.0 00:01:43.711 LIB libspdk_vmd.a 00:01:43.711 SO libspdk_vmd.so.6.0 00:01:43.711 SYMLINK libspdk_idxd.so 00:01:43.711 SYMLINK libspdk_vmd.so 00:01:43.711 CC lib/jsonrpc/jsonrpc_server.o 00:01:43.711 CC lib/jsonrpc/jsonrpc_client.o 00:01:43.711 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:43.711 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:43.971 LIB libspdk_jsonrpc.a 00:01:44.233 SO libspdk_jsonrpc.so.6.0 00:01:44.233 LIB libspdk_env_dpdk.a 00:01:44.233 SYMLINK libspdk_jsonrpc.so 00:01:44.233 SO libspdk_env_dpdk.so.14.0 00:01:44.493 SYMLINK libspdk_env_dpdk.so 00:01:44.493 CC lib/rpc/rpc.o 00:01:44.755 LIB libspdk_rpc.a 00:01:44.755 SO libspdk_rpc.so.6.0 00:01:44.755 SYMLINK libspdk_rpc.so 00:01:45.327 CC lib/notify/notify.o 00:01:45.327 CC lib/trace/trace.o 00:01:45.327 CC lib/notify/notify_rpc.o 00:01:45.327 CC lib/trace/trace_flags.o 00:01:45.327 CC lib/trace/trace_rpc.o 00:01:45.327 CC lib/keyring/keyring_rpc.o 00:01:45.327 CC lib/keyring/keyring.o 00:01:45.327 LIB libspdk_notify.a 00:01:45.327 SO libspdk_notify.so.6.0 00:01:45.327 LIB libspdk_keyring.a 00:01:45.327 LIB libspdk_trace.a 00:01:45.327 SO libspdk_keyring.so.1.0 00:01:45.588 SYMLINK 
libspdk_notify.so 00:01:45.588 SO libspdk_trace.so.10.0 00:01:45.588 SYMLINK libspdk_keyring.so 00:01:45.588 SYMLINK libspdk_trace.so 00:01:45.849 CC lib/thread/thread.o 00:01:45.849 CC lib/thread/iobuf.o 00:01:45.849 CC lib/sock/sock.o 00:01:45.849 CC lib/sock/sock_rpc.o 00:01:46.110 LIB libspdk_sock.a 00:01:46.371 SO libspdk_sock.so.9.0 00:01:46.371 SYMLINK libspdk_sock.so 00:01:46.632 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:46.632 CC lib/nvme/nvme_ctrlr.o 00:01:46.632 CC lib/nvme/nvme_fabric.o 00:01:46.632 CC lib/nvme/nvme_ns_cmd.o 00:01:46.632 CC lib/nvme/nvme_ns.o 00:01:46.632 CC lib/nvme/nvme_pcie_common.o 00:01:46.632 CC lib/nvme/nvme_pcie.o 00:01:46.632 CC lib/nvme/nvme_qpair.o 00:01:46.632 CC lib/nvme/nvme.o 00:01:46.632 CC lib/nvme/nvme_quirks.o 00:01:46.632 CC lib/nvme/nvme_transport.o 00:01:46.632 CC lib/nvme/nvme_discovery.o 00:01:46.632 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:46.632 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:46.632 CC lib/nvme/nvme_tcp.o 00:01:46.632 CC lib/nvme/nvme_opal.o 00:01:46.632 CC lib/nvme/nvme_io_msg.o 00:01:46.632 CC lib/nvme/nvme_poll_group.o 00:01:46.632 CC lib/nvme/nvme_zns.o 00:01:46.632 CC lib/nvme/nvme_stubs.o 00:01:46.632 CC lib/nvme/nvme_auth.o 00:01:46.632 CC lib/nvme/nvme_cuse.o 00:01:46.632 CC lib/nvme/nvme_vfio_user.o 00:01:46.632 CC lib/nvme/nvme_rdma.o 00:01:47.203 LIB libspdk_thread.a 00:01:47.203 SO libspdk_thread.so.10.0 00:01:47.203 SYMLINK libspdk_thread.so 00:01:47.464 CC lib/blob/blob_bs_dev.o 00:01:47.464 CC lib/blob/blobstore.o 00:01:47.464 CC lib/blob/request.o 00:01:47.464 CC lib/blob/zeroes.o 00:01:47.464 CC lib/vfu_tgt/tgt_endpoint.o 00:01:47.464 CC lib/init/json_config.o 00:01:47.464 CC lib/vfu_tgt/tgt_rpc.o 00:01:47.464 CC lib/init/subsystem.o 00:01:47.464 CC lib/init/subsystem_rpc.o 00:01:47.464 CC lib/init/rpc.o 00:01:47.464 CC lib/accel/accel.o 00:01:47.464 CC lib/accel/accel_rpc.o 00:01:47.464 CC lib/accel/accel_sw.o 00:01:47.464 CC lib/virtio/virtio.o 00:01:47.464 CC lib/virtio/virtio_vhost_user.o 00:01:47.464 CC lib/virtio/virtio_vfio_user.o 00:01:47.464 CC lib/virtio/virtio_pci.o 00:01:47.725 LIB libspdk_init.a 00:01:47.725 SO libspdk_init.so.5.0 00:01:47.986 LIB libspdk_vfu_tgt.a 00:01:47.986 SYMLINK libspdk_init.so 00:01:47.986 LIB libspdk_virtio.a 00:01:47.986 SO libspdk_vfu_tgt.so.3.0 00:01:47.986 SO libspdk_virtio.so.7.0 00:01:47.986 SYMLINK libspdk_vfu_tgt.so 00:01:47.986 SYMLINK libspdk_virtio.so 00:01:48.249 CC lib/event/app.o 00:01:48.249 CC lib/event/reactor.o 00:01:48.249 CC lib/event/log_rpc.o 00:01:48.249 CC lib/event/app_rpc.o 00:01:48.249 CC lib/event/scheduler_static.o 00:01:48.511 LIB libspdk_accel.a 00:01:48.511 LIB libspdk_nvme.a 00:01:48.511 SO libspdk_accel.so.15.0 00:01:48.511 SO libspdk_nvme.so.13.0 00:01:48.511 SYMLINK libspdk_accel.so 00:01:48.511 LIB libspdk_event.a 00:01:48.511 SO libspdk_event.so.13.0 00:01:48.773 SYMLINK libspdk_event.so 00:01:48.773 SYMLINK libspdk_nvme.so 00:01:48.773 CC lib/bdev/bdev.o 00:01:48.773 CC lib/bdev/bdev_rpc.o 00:01:48.773 CC lib/bdev/bdev_zone.o 00:01:49.036 CC lib/bdev/part.o 00:01:49.036 CC lib/bdev/scsi_nvme.o 00:01:49.980 LIB libspdk_blob.a 00:01:49.980 SO libspdk_blob.so.11.0 00:01:49.980 SYMLINK libspdk_blob.so 00:01:50.553 CC lib/blobfs/blobfs.o 00:01:50.553 CC lib/blobfs/tree.o 00:01:50.553 CC lib/lvol/lvol.o 00:01:51.126 LIB libspdk_blobfs.a 00:01:51.126 LIB libspdk_bdev.a 00:01:51.126 SO libspdk_blobfs.so.10.0 00:01:51.126 LIB libspdk_lvol.a 00:01:51.126 SO libspdk_bdev.so.15.0 00:01:51.126 SO libspdk_lvol.so.10.0 00:01:51.126 SYMLINK libspdk_blobfs.so 
00:01:51.388 SYMLINK libspdk_lvol.so 00:01:51.388 SYMLINK libspdk_bdev.so 00:01:51.649 CC lib/nvmf/ctrlr.o 00:01:51.649 CC lib/nvmf/ctrlr_discovery.o 00:01:51.649 CC lib/nvmf/ctrlr_bdev.o 00:01:51.649 CC lib/nvmf/subsystem.o 00:01:51.649 CC lib/nvmf/nvmf.o 00:01:51.649 CC lib/nvmf/transport.o 00:01:51.649 CC lib/nvmf/nvmf_rpc.o 00:01:51.649 CC lib/nvmf/tcp.o 00:01:51.649 CC lib/nvmf/vfio_user.o 00:01:51.649 CC lib/nvmf/rdma.o 00:01:51.649 CC lib/ublk/ublk.o 00:01:51.649 CC lib/ublk/ublk_rpc.o 00:01:51.649 CC lib/scsi/dev.o 00:01:51.649 CC lib/scsi/lun.o 00:01:51.649 CC lib/scsi/port.o 00:01:51.649 CC lib/scsi/scsi.o 00:01:51.649 CC lib/scsi/scsi_bdev.o 00:01:51.649 CC lib/scsi/scsi_pr.o 00:01:51.649 CC lib/scsi/scsi_rpc.o 00:01:51.649 CC lib/nbd/nbd.o 00:01:51.649 CC lib/ftl/ftl_core.o 00:01:51.649 CC lib/nbd/nbd_rpc.o 00:01:51.649 CC lib/scsi/task.o 00:01:51.649 CC lib/ftl/ftl_init.o 00:01:51.649 CC lib/ftl/ftl_layout.o 00:01:51.649 CC lib/ftl/ftl_debug.o 00:01:51.649 CC lib/ftl/ftl_io.o 00:01:51.649 CC lib/ftl/ftl_sb.o 00:01:51.649 CC lib/ftl/ftl_l2p.o 00:01:51.649 CC lib/ftl/ftl_l2p_flat.o 00:01:51.649 CC lib/ftl/ftl_nv_cache.o 00:01:51.649 CC lib/ftl/ftl_band.o 00:01:51.649 CC lib/ftl/ftl_band_ops.o 00:01:51.649 CC lib/ftl/ftl_writer.o 00:01:51.649 CC lib/ftl/ftl_rq.o 00:01:51.649 CC lib/ftl/ftl_reloc.o 00:01:51.649 CC lib/ftl/ftl_p2l.o 00:01:51.649 CC lib/ftl/ftl_l2p_cache.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:51.649 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:51.649 CC lib/ftl/utils/ftl_conf.o 00:01:51.649 CC lib/ftl/utils/ftl_md.o 00:01:51.649 CC lib/ftl/utils/ftl_mempool.o 00:01:51.649 CC lib/ftl/utils/ftl_bitmap.o 00:01:51.649 CC lib/ftl/utils/ftl_property.o 00:01:51.649 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:51.649 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:51.649 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:51.649 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:51.649 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:51.649 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:51.649 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:51.649 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:51.649 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:51.649 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:51.649 CC lib/ftl/base/ftl_base_dev.o 00:01:51.649 CC lib/ftl/base/ftl_base_bdev.o 00:01:51.649 CC lib/ftl/ftl_trace.o 00:01:52.221 LIB libspdk_nbd.a 00:01:52.221 SO libspdk_nbd.so.7.0 00:01:52.221 LIB libspdk_scsi.a 00:01:52.221 SYMLINK libspdk_nbd.so 00:01:52.221 SO libspdk_scsi.so.9.0 00:01:52.221 LIB libspdk_ublk.a 00:01:52.221 SO libspdk_ublk.so.3.0 00:01:52.221 SYMLINK libspdk_scsi.so 00:01:52.481 SYMLINK libspdk_ublk.so 00:01:52.481 LIB libspdk_ftl.a 00:01:52.741 CC lib/iscsi/init_grp.o 00:01:52.741 CC lib/iscsi/conn.o 00:01:52.741 CC lib/iscsi/iscsi.o 00:01:52.741 CC lib/iscsi/md5.o 00:01:52.741 CC lib/vhost/vhost.o 00:01:52.741 CC lib/iscsi/param.o 00:01:52.741 CC lib/iscsi/portal_grp.o 00:01:52.741 CC lib/vhost/vhost_rpc.o 00:01:52.741 CC lib/vhost/vhost_scsi.o 00:01:52.741 CC lib/iscsi/tgt_node.o 00:01:52.741 CC 
lib/vhost/vhost_blk.o 00:01:52.741 CC lib/iscsi/iscsi_subsystem.o 00:01:52.741 CC lib/vhost/rte_vhost_user.o 00:01:52.741 CC lib/iscsi/iscsi_rpc.o 00:01:52.741 CC lib/iscsi/task.o 00:01:52.741 SO libspdk_ftl.so.9.0 00:01:53.000 SYMLINK libspdk_ftl.so 00:01:53.261 LIB libspdk_nvmf.a 00:01:53.521 SO libspdk_nvmf.so.18.0 00:01:53.521 LIB libspdk_vhost.a 00:01:53.521 SYMLINK libspdk_nvmf.so 00:01:53.521 SO libspdk_vhost.so.8.0 00:01:53.832 SYMLINK libspdk_vhost.so 00:01:53.832 LIB libspdk_iscsi.a 00:01:53.832 SO libspdk_iscsi.so.8.0 00:01:54.152 SYMLINK libspdk_iscsi.so 00:01:54.732 CC module/vfu_device/vfu_virtio.o 00:01:54.732 CC module/vfu_device/vfu_virtio_blk.o 00:01:54.732 CC module/vfu_device/vfu_virtio_rpc.o 00:01:54.732 CC module/vfu_device/vfu_virtio_scsi.o 00:01:54.732 CC module/env_dpdk/env_dpdk_rpc.o 00:01:54.732 LIB libspdk_env_dpdk_rpc.a 00:01:54.732 CC module/keyring/file/keyring_rpc.o 00:01:54.732 CC module/keyring/file/keyring.o 00:01:54.732 CC module/sock/posix/posix.o 00:01:54.732 CC module/accel/dsa/accel_dsa.o 00:01:54.732 CC module/accel/dsa/accel_dsa_rpc.o 00:01:54.732 CC module/accel/error/accel_error.o 00:01:54.732 CC module/accel/error/accel_error_rpc.o 00:01:54.732 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:54.732 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:54.732 CC module/accel/iaa/accel_iaa_rpc.o 00:01:54.732 CC module/accel/iaa/accel_iaa.o 00:01:54.732 CC module/scheduler/gscheduler/gscheduler.o 00:01:54.732 CC module/accel/ioat/accel_ioat.o 00:01:54.732 CC module/accel/ioat/accel_ioat_rpc.o 00:01:54.732 CC module/blob/bdev/blob_bdev.o 00:01:54.732 SO libspdk_env_dpdk_rpc.so.6.0 00:01:54.993 SYMLINK libspdk_env_dpdk_rpc.so 00:01:54.993 LIB libspdk_keyring_file.a 00:01:54.993 LIB libspdk_scheduler_gscheduler.a 00:01:54.993 LIB libspdk_scheduler_dpdk_governor.a 00:01:54.993 SO libspdk_keyring_file.so.1.0 00:01:54.993 LIB libspdk_accel_error.a 00:01:54.993 LIB libspdk_accel_ioat.a 00:01:54.993 LIB libspdk_scheduler_dynamic.a 00:01:54.993 LIB libspdk_accel_iaa.a 00:01:54.993 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:54.993 LIB libspdk_accel_dsa.a 00:01:54.993 SO libspdk_scheduler_gscheduler.so.4.0 00:01:54.993 SO libspdk_accel_error.so.2.0 00:01:54.993 SO libspdk_accel_ioat.so.6.0 00:01:54.993 SO libspdk_scheduler_dynamic.so.4.0 00:01:54.993 SYMLINK libspdk_keyring_file.so 00:01:54.993 SO libspdk_accel_dsa.so.5.0 00:01:54.993 LIB libspdk_blob_bdev.a 00:01:54.993 SO libspdk_accel_iaa.so.3.0 00:01:54.993 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:54.993 SYMLINK libspdk_scheduler_gscheduler.so 00:01:54.993 SO libspdk_blob_bdev.so.11.0 00:01:54.993 SYMLINK libspdk_accel_ioat.so 00:01:54.993 SYMLINK libspdk_accel_error.so 00:01:54.993 SYMLINK libspdk_scheduler_dynamic.so 00:01:55.254 SYMLINK libspdk_accel_dsa.so 00:01:55.254 SYMLINK libspdk_accel_iaa.so 00:01:55.254 LIB libspdk_vfu_device.a 00:01:55.254 SYMLINK libspdk_blob_bdev.so 00:01:55.254 SO libspdk_vfu_device.so.3.0 00:01:55.254 SYMLINK libspdk_vfu_device.so 00:01:55.514 LIB libspdk_sock_posix.a 00:01:55.514 SO libspdk_sock_posix.so.6.0 00:01:55.514 SYMLINK libspdk_sock_posix.so 00:01:55.775 CC module/blobfs/bdev/blobfs_bdev.o 00:01:55.775 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:55.775 CC module/bdev/delay/vbdev_delay.o 00:01:55.775 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:55.775 CC module/bdev/aio/bdev_aio.o 00:01:55.775 CC module/bdev/aio/bdev_aio_rpc.o 00:01:55.775 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:55.775 CC module/bdev/virtio/bdev_virtio_blk.o 
00:01:55.775 CC module/bdev/null/bdev_null.o 00:01:55.775 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:55.775 CC module/bdev/null/bdev_null_rpc.o 00:01:55.775 CC module/bdev/lvol/vbdev_lvol.o 00:01:55.775 CC module/bdev/passthru/vbdev_passthru.o 00:01:55.775 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:55.775 CC module/bdev/error/vbdev_error.o 00:01:55.775 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:55.775 CC module/bdev/error/vbdev_error_rpc.o 00:01:55.775 CC module/bdev/malloc/bdev_malloc.o 00:01:55.775 CC module/bdev/raid/bdev_raid.o 00:01:55.775 CC module/bdev/split/vbdev_split.o 00:01:55.775 CC module/bdev/ftl/bdev_ftl.o 00:01:55.775 CC module/bdev/gpt/gpt.o 00:01:55.775 CC module/bdev/raid/bdev_raid_rpc.o 00:01:55.775 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:55.775 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:55.775 CC module/bdev/split/vbdev_split_rpc.o 00:01:55.775 CC module/bdev/gpt/vbdev_gpt.o 00:01:55.775 CC module/bdev/iscsi/bdev_iscsi.o 00:01:55.775 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:55.775 CC module/bdev/raid/bdev_raid_sb.o 00:01:55.775 CC module/bdev/nvme/bdev_nvme.o 00:01:55.775 CC module/bdev/raid/raid0.o 00:01:55.775 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:55.775 CC module/bdev/raid/raid1.o 00:01:55.775 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:55.775 CC module/bdev/nvme/nvme_rpc.o 00:01:55.775 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:55.775 CC module/bdev/nvme/bdev_mdns_client.o 00:01:55.775 CC module/bdev/raid/concat.o 00:01:55.775 CC module/bdev/nvme/vbdev_opal.o 00:01:55.775 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:55.775 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:56.036 LIB libspdk_blobfs_bdev.a 00:01:56.036 SO libspdk_blobfs_bdev.so.6.0 00:01:56.036 SYMLINK libspdk_blobfs_bdev.so 00:01:56.036 LIB libspdk_bdev_split.a 00:01:56.036 LIB libspdk_bdev_null.a 00:01:56.036 LIB libspdk_bdev_error.a 00:01:56.036 SO libspdk_bdev_null.so.6.0 00:01:56.036 SO libspdk_bdev_split.so.6.0 00:01:56.036 LIB libspdk_bdev_passthru.a 00:01:56.036 LIB libspdk_bdev_gpt.a 00:01:56.036 SO libspdk_bdev_error.so.6.0 00:01:56.036 SO libspdk_bdev_passthru.so.6.0 00:01:56.036 LIB libspdk_bdev_ftl.a 00:01:56.036 LIB libspdk_bdev_aio.a 00:01:56.036 LIB libspdk_bdev_delay.a 00:01:56.036 SO libspdk_bdev_gpt.so.6.0 00:01:56.036 LIB libspdk_bdev_zone_block.a 00:01:56.036 LIB libspdk_bdev_malloc.a 00:01:56.036 SO libspdk_bdev_aio.so.6.0 00:01:56.036 SYMLINK libspdk_bdev_null.so 00:01:56.036 SYMLINK libspdk_bdev_error.so 00:01:56.036 SO libspdk_bdev_ftl.so.6.0 00:01:56.036 SYMLINK libspdk_bdev_split.so 00:01:56.036 LIB libspdk_bdev_iscsi.a 00:01:56.036 SO libspdk_bdev_delay.so.6.0 00:01:56.036 SYMLINK libspdk_bdev_passthru.so 00:01:56.036 SO libspdk_bdev_zone_block.so.6.0 00:01:56.036 SO libspdk_bdev_iscsi.so.6.0 00:01:56.036 SO libspdk_bdev_malloc.so.6.0 00:01:56.036 SYMLINK libspdk_bdev_gpt.so 00:01:56.298 SYMLINK libspdk_bdev_aio.so 00:01:56.298 SYMLINK libspdk_bdev_ftl.so 00:01:56.298 LIB libspdk_bdev_lvol.a 00:01:56.298 SYMLINK libspdk_bdev_delay.so 00:01:56.298 SYMLINK libspdk_bdev_zone_block.so 00:01:56.298 SYMLINK libspdk_bdev_malloc.so 00:01:56.298 SYMLINK libspdk_bdev_iscsi.so 00:01:56.298 SO libspdk_bdev_lvol.so.6.0 00:01:56.298 LIB libspdk_bdev_virtio.a 00:01:56.298 SO libspdk_bdev_virtio.so.6.0 00:01:56.298 SYMLINK libspdk_bdev_lvol.so 00:01:56.298 SYMLINK libspdk_bdev_virtio.so 00:01:56.559 LIB libspdk_bdev_raid.a 00:01:56.559 SO libspdk_bdev_raid.so.6.0 00:01:56.559 SYMLINK libspdk_bdev_raid.so 00:01:57.502 LIB libspdk_bdev_nvme.a 00:01:57.502 
SO libspdk_bdev_nvme.so.7.0 00:01:57.762 SYMLINK libspdk_bdev_nvme.so 00:01:58.334 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:58.334 CC module/event/subsystems/iobuf/iobuf.o 00:01:58.334 CC module/event/subsystems/scheduler/scheduler.o 00:01:58.334 CC module/event/subsystems/vmd/vmd.o 00:01:58.334 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:58.334 CC module/event/subsystems/sock/sock.o 00:01:58.334 CC module/event/subsystems/keyring/keyring.o 00:01:58.334 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:58.334 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:58.334 LIB libspdk_event_iobuf.a 00:01:58.334 LIB libspdk_event_scheduler.a 00:01:58.595 LIB libspdk_event_vhost_blk.a 00:01:58.595 LIB libspdk_event_vfu_tgt.a 00:01:58.595 LIB libspdk_event_sock.a 00:01:58.595 LIB libspdk_event_keyring.a 00:01:58.595 LIB libspdk_event_vmd.a 00:01:58.595 SO libspdk_event_iobuf.so.3.0 00:01:58.596 SO libspdk_event_scheduler.so.4.0 00:01:58.596 SO libspdk_event_vhost_blk.so.3.0 00:01:58.596 SO libspdk_event_sock.so.5.0 00:01:58.596 SO libspdk_event_vfu_tgt.so.3.0 00:01:58.596 SO libspdk_event_keyring.so.1.0 00:01:58.596 SO libspdk_event_vmd.so.6.0 00:01:58.596 SYMLINK libspdk_event_iobuf.so 00:01:58.596 SYMLINK libspdk_event_scheduler.so 00:01:58.596 SYMLINK libspdk_event_sock.so 00:01:58.596 SYMLINK libspdk_event_vhost_blk.so 00:01:58.596 SYMLINK libspdk_event_keyring.so 00:01:58.596 SYMLINK libspdk_event_vfu_tgt.so 00:01:58.596 SYMLINK libspdk_event_vmd.so 00:01:58.856 CC module/event/subsystems/accel/accel.o 00:01:59.117 LIB libspdk_event_accel.a 00:01:59.117 SO libspdk_event_accel.so.6.0 00:01:59.117 SYMLINK libspdk_event_accel.so 00:01:59.378 CC module/event/subsystems/bdev/bdev.o 00:01:59.639 LIB libspdk_event_bdev.a 00:01:59.639 SO libspdk_event_bdev.so.6.0 00:01:59.639 SYMLINK libspdk_event_bdev.so 00:01:59.900 CC module/event/subsystems/scsi/scsi.o 00:01:59.900 CC module/event/subsystems/nbd/nbd.o 00:02:00.161 CC module/event/subsystems/ublk/ublk.o 00:02:00.161 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:00.161 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:00.161 LIB libspdk_event_nbd.a 00:02:00.161 LIB libspdk_event_scsi.a 00:02:00.161 LIB libspdk_event_ublk.a 00:02:00.161 SO libspdk_event_scsi.so.6.0 00:02:00.161 SO libspdk_event_nbd.so.6.0 00:02:00.161 SO libspdk_event_ublk.so.3.0 00:02:00.161 LIB libspdk_event_nvmf.a 00:02:00.161 SYMLINK libspdk_event_scsi.so 00:02:00.161 SYMLINK libspdk_event_nbd.so 00:02:00.421 SYMLINK libspdk_event_ublk.so 00:02:00.421 SO libspdk_event_nvmf.so.6.0 00:02:00.421 SYMLINK libspdk_event_nvmf.so 00:02:00.681 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:00.681 CC module/event/subsystems/iscsi/iscsi.o 00:02:00.681 LIB libspdk_event_vhost_scsi.a 00:02:00.942 LIB libspdk_event_iscsi.a 00:02:00.942 SO libspdk_event_vhost_scsi.so.3.0 00:02:00.942 SO libspdk_event_iscsi.so.6.0 00:02:00.942 SYMLINK libspdk_event_vhost_scsi.so 00:02:00.942 SYMLINK libspdk_event_iscsi.so 00:02:01.203 SO libspdk.so.6.0 00:02:01.203 SYMLINK libspdk.so 00:02:01.463 CC app/spdk_lspci/spdk_lspci.o 00:02:01.463 CC app/trace_record/trace_record.o 00:02:01.463 CC app/spdk_nvme_perf/perf.o 00:02:01.463 CC app/spdk_top/spdk_top.o 00:02:01.463 TEST_HEADER include/spdk/accel.h 00:02:01.463 CXX app/trace/trace.o 00:02:01.463 TEST_HEADER include/spdk/assert.h 00:02:01.463 TEST_HEADER include/spdk/bdev.h 00:02:01.463 TEST_HEADER include/spdk/accel_module.h 00:02:01.463 TEST_HEADER include/spdk/bdev_module.h 00:02:01.463 TEST_HEADER include/spdk/barrier.h 
00:02:01.463 TEST_HEADER include/spdk/bdev_zone.h 00:02:01.463 TEST_HEADER include/spdk/base64.h 00:02:01.463 TEST_HEADER include/spdk/bit_array.h 00:02:01.463 TEST_HEADER include/spdk/blob_bdev.h 00:02:01.463 TEST_HEADER include/spdk/bit_pool.h 00:02:01.463 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:01.463 TEST_HEADER include/spdk/blobfs.h 00:02:01.463 CC test/rpc_client/rpc_client_test.o 00:02:01.463 TEST_HEADER include/spdk/conf.h 00:02:01.463 TEST_HEADER include/spdk/config.h 00:02:01.463 TEST_HEADER include/spdk/blob.h 00:02:01.463 CC app/spdk_nvme_discover/discovery_aer.o 00:02:01.463 TEST_HEADER include/spdk/cpuset.h 00:02:01.463 CC app/spdk_nvme_identify/identify.o 00:02:01.463 TEST_HEADER include/spdk/crc16.h 00:02:01.463 TEST_HEADER include/spdk/crc64.h 00:02:01.463 TEST_HEADER include/spdk/crc32.h 00:02:01.463 TEST_HEADER include/spdk/dif.h 00:02:01.463 TEST_HEADER include/spdk/dma.h 00:02:01.463 TEST_HEADER include/spdk/endian.h 00:02:01.463 TEST_HEADER include/spdk/env_dpdk.h 00:02:01.463 TEST_HEADER include/spdk/event.h 00:02:01.463 TEST_HEADER include/spdk/env.h 00:02:01.463 TEST_HEADER include/spdk/fd.h 00:02:01.463 TEST_HEADER include/spdk/fd_group.h 00:02:01.463 TEST_HEADER include/spdk/ftl.h 00:02:01.463 TEST_HEADER include/spdk/gpt_spec.h 00:02:01.732 TEST_HEADER include/spdk/hexlify.h 00:02:01.732 TEST_HEADER include/spdk/file.h 00:02:01.732 TEST_HEADER include/spdk/histogram_data.h 00:02:01.732 TEST_HEADER include/spdk/idxd.h 00:02:01.732 TEST_HEADER include/spdk/init.h 00:02:01.732 TEST_HEADER include/spdk/ioat.h 00:02:01.732 TEST_HEADER include/spdk/idxd_spec.h 00:02:01.732 TEST_HEADER include/spdk/ioat_spec.h 00:02:01.732 TEST_HEADER include/spdk/json.h 00:02:01.732 TEST_HEADER include/spdk/iscsi_spec.h 00:02:01.732 TEST_HEADER include/spdk/jsonrpc.h 00:02:01.732 CC app/vhost/vhost.o 00:02:01.732 TEST_HEADER include/spdk/likely.h 00:02:01.732 TEST_HEADER include/spdk/keyring_module.h 00:02:01.732 TEST_HEADER include/spdk/keyring.h 00:02:01.732 TEST_HEADER include/spdk/log.h 00:02:01.732 TEST_HEADER include/spdk/lvol.h 00:02:01.732 TEST_HEADER include/spdk/memory.h 00:02:01.732 TEST_HEADER include/spdk/mmio.h 00:02:01.732 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:01.732 CC app/spdk_dd/spdk_dd.o 00:02:01.732 TEST_HEADER include/spdk/nvme.h 00:02:01.732 TEST_HEADER include/spdk/nbd.h 00:02:01.732 TEST_HEADER include/spdk/notify.h 00:02:01.732 CC app/iscsi_tgt/iscsi_tgt.o 00:02:01.732 TEST_HEADER include/spdk/nvme_intel.h 00:02:01.732 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:01.732 TEST_HEADER include/spdk/nvme_spec.h 00:02:01.732 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:01.732 TEST_HEADER include/spdk/nvme_zns.h 00:02:01.732 CC app/spdk_tgt/spdk_tgt.o 00:02:01.732 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:01.732 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:01.732 TEST_HEADER include/spdk/nvmf.h 00:02:01.732 TEST_HEADER include/spdk/nvmf_transport.h 00:02:01.732 TEST_HEADER include/spdk/nvmf_spec.h 00:02:01.732 TEST_HEADER include/spdk/opal.h 00:02:01.732 CC app/nvmf_tgt/nvmf_main.o 00:02:01.732 TEST_HEADER include/spdk/opal_spec.h 00:02:01.732 TEST_HEADER include/spdk/pipe.h 00:02:01.732 TEST_HEADER include/spdk/pci_ids.h 00:02:01.732 TEST_HEADER include/spdk/reduce.h 00:02:01.732 TEST_HEADER include/spdk/queue.h 00:02:01.732 TEST_HEADER include/spdk/rpc.h 00:02:01.732 TEST_HEADER include/spdk/scheduler.h 00:02:01.732 TEST_HEADER include/spdk/scsi_spec.h 00:02:01.732 TEST_HEADER include/spdk/sock.h 00:02:01.732 TEST_HEADER include/spdk/scsi.h 
00:02:01.732 TEST_HEADER include/spdk/stdinc.h 00:02:01.732 TEST_HEADER include/spdk/string.h 00:02:01.732 TEST_HEADER include/spdk/thread.h 00:02:01.732 TEST_HEADER include/spdk/tree.h 00:02:01.732 TEST_HEADER include/spdk/trace.h 00:02:01.732 TEST_HEADER include/spdk/trace_parser.h 00:02:01.732 TEST_HEADER include/spdk/ublk.h 00:02:01.732 TEST_HEADER include/spdk/util.h 00:02:01.732 TEST_HEADER include/spdk/uuid.h 00:02:01.732 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:01.732 TEST_HEADER include/spdk/version.h 00:02:01.732 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:01.732 TEST_HEADER include/spdk/vhost.h 00:02:01.732 TEST_HEADER include/spdk/vmd.h 00:02:01.732 TEST_HEADER include/spdk/xor.h 00:02:01.732 TEST_HEADER include/spdk/zipf.h 00:02:01.732 CXX test/cpp_headers/accel.o 00:02:01.732 CXX test/cpp_headers/accel_module.o 00:02:01.732 CXX test/cpp_headers/assert.o 00:02:01.732 CXX test/cpp_headers/barrier.o 00:02:01.732 CXX test/cpp_headers/base64.o 00:02:01.732 CXX test/cpp_headers/bdev.o 00:02:01.732 CXX test/cpp_headers/bdev_module.o 00:02:01.732 CXX test/cpp_headers/bdev_zone.o 00:02:01.732 CXX test/cpp_headers/bit_array.o 00:02:01.732 CXX test/cpp_headers/blob_bdev.o 00:02:01.732 CXX test/cpp_headers/bit_pool.o 00:02:01.732 CXX test/cpp_headers/blobfs_bdev.o 00:02:01.732 CXX test/cpp_headers/blobfs.o 00:02:01.732 CXX test/cpp_headers/blob.o 00:02:01.732 CXX test/cpp_headers/config.o 00:02:01.732 CXX test/cpp_headers/conf.o 00:02:01.732 CXX test/cpp_headers/crc16.o 00:02:01.732 CXX test/cpp_headers/cpuset.o 00:02:01.732 CXX test/cpp_headers/crc32.o 00:02:01.732 CXX test/cpp_headers/crc64.o 00:02:01.732 CXX test/cpp_headers/dif.o 00:02:01.732 CXX test/cpp_headers/dma.o 00:02:01.732 CXX test/cpp_headers/endian.o 00:02:01.732 CXX test/cpp_headers/env_dpdk.o 00:02:01.732 CXX test/cpp_headers/env.o 00:02:01.732 CXX test/cpp_headers/event.o 00:02:01.732 CXX test/cpp_headers/fd_group.o 00:02:01.732 CXX test/cpp_headers/fd.o 00:02:01.732 CXX test/cpp_headers/file.o 00:02:01.732 CXX test/cpp_headers/ftl.o 00:02:01.732 CXX test/cpp_headers/gpt_spec.o 00:02:01.732 CXX test/cpp_headers/histogram_data.o 00:02:01.732 CXX test/cpp_headers/hexlify.o 00:02:01.732 CXX test/cpp_headers/idxd_spec.o 00:02:01.732 CXX test/cpp_headers/idxd.o 00:02:01.732 CXX test/cpp_headers/init.o 00:02:01.732 CXX test/cpp_headers/ioat.o 00:02:01.732 CXX test/cpp_headers/ioat_spec.o 00:02:01.732 CXX test/cpp_headers/iscsi_spec.o 00:02:01.732 CXX test/cpp_headers/jsonrpc.o 00:02:01.732 CXX test/cpp_headers/json.o 00:02:01.732 CXX test/cpp_headers/keyring_module.o 00:02:01.732 CXX test/cpp_headers/keyring.o 00:02:01.732 CXX test/cpp_headers/likely.o 00:02:01.732 CXX test/cpp_headers/lvol.o 00:02:01.732 CXX test/cpp_headers/memory.o 00:02:01.732 CXX test/cpp_headers/mmio.o 00:02:01.732 CXX test/cpp_headers/log.o 00:02:01.732 CXX test/cpp_headers/nbd.o 00:02:01.732 CXX test/cpp_headers/notify.o 00:02:01.732 CXX test/cpp_headers/nvme.o 00:02:01.732 CXX test/cpp_headers/nvme_intel.o 00:02:01.732 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:01.732 CXX test/cpp_headers/nvme_ocssd.o 00:02:01.732 CXX test/cpp_headers/nvme_spec.o 00:02:01.732 CXX test/cpp_headers/nvme_zns.o 00:02:01.732 CXX test/cpp_headers/nvmf_cmd.o 00:02:01.732 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:01.732 CXX test/cpp_headers/nvmf.o 00:02:01.732 CXX test/cpp_headers/nvmf_transport.o 00:02:01.732 CXX test/cpp_headers/nvmf_spec.o 00:02:01.732 CXX test/cpp_headers/opal.o 00:02:01.732 CXX test/cpp_headers/opal_spec.o 00:02:01.732 CXX 
test/cpp_headers/pci_ids.o 00:02:01.732 CXX test/cpp_headers/queue.o 00:02:01.732 CXX test/cpp_headers/pipe.o 00:02:01.732 CXX test/cpp_headers/reduce.o 00:02:01.732 CXX test/cpp_headers/rpc.o 00:02:01.732 CXX test/cpp_headers/scheduler.o 00:02:01.732 CXX test/cpp_headers/scsi.o 00:02:01.732 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:01.732 CC examples/nvme/hotplug/hotplug.o 00:02:01.732 CC examples/nvme/reconnect/reconnect.o 00:02:01.732 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:01.732 CC examples/nvme/arbitration/arbitration.o 00:02:01.732 CC examples/ioat/verify/verify.o 00:02:01.732 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:01.732 CC examples/ioat/perf/perf.o 00:02:01.732 CC examples/nvme/abort/abort.o 00:02:01.732 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.732 CC examples/sock/hello_world/hello_sock.o 00:02:01.732 CC test/nvme/aer/aer.o 00:02:01.732 CC examples/nvme/hello_world/hello_world.o 00:02:01.733 CC test/event/reactor_perf/reactor_perf.o 00:02:01.733 CXX test/cpp_headers/scsi_spec.o 00:02:01.733 CC test/app/jsoncat/jsoncat.o 00:02:01.733 CC test/env/pci/pci_ut.o 00:02:01.733 CC test/event/reactor/reactor.o 00:02:01.733 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:01.733 CC test/app/histogram_perf/histogram_perf.o 00:02:01.733 CC examples/bdev/hello_world/hello_bdev.o 00:02:01.733 CC test/env/memory/memory_ut.o 00:02:01.733 CC examples/vmd/led/led.o 00:02:01.733 CC test/app/stub/stub.o 00:02:01.733 CC examples/util/zipf/zipf.o 00:02:02.003 CC examples/accel/perf/accel_perf.o 00:02:02.003 CC test/nvme/reset/reset.o 00:02:02.003 CC test/event/event_perf/event_perf.o 00:02:02.003 CC test/nvme/e2edp/nvme_dp.o 00:02:02.003 CC test/thread/poller_perf/poller_perf.o 00:02:02.003 CC app/fio/nvme/fio_plugin.o 00:02:02.003 CC examples/bdev/bdevperf/bdevperf.o 00:02:02.003 CC examples/idxd/perf/perf.o 00:02:02.003 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:02.003 CC test/nvme/compliance/nvme_compliance.o 00:02:02.003 CC test/nvme/sgl/sgl.o 00:02:02.003 CC test/nvme/err_injection/err_injection.o 00:02:02.003 CC test/nvme/fdp/fdp.o 00:02:02.004 CC test/nvme/overhead/overhead.o 00:02:02.004 CC test/nvme/startup/startup.o 00:02:02.004 CC test/nvme/fused_ordering/fused_ordering.o 00:02:02.004 CC examples/nvmf/nvmf/nvmf.o 00:02:02.004 CC test/nvme/boot_partition/boot_partition.o 00:02:02.004 CC test/nvme/cuse/cuse.o 00:02:02.004 CC test/nvme/connect_stress/connect_stress.o 00:02:02.004 CC test/accel/dif/dif.o 00:02:02.004 CC test/env/vtophys/vtophys.o 00:02:02.004 CC test/event/app_repeat/app_repeat.o 00:02:02.004 CC test/nvme/simple_copy/simple_copy.o 00:02:02.004 CC examples/thread/thread/thread_ex.o 00:02:02.004 CC test/nvme/reserve/reserve.o 00:02:02.004 CC app/fio/bdev/fio_plugin.o 00:02:02.004 CC test/app/bdev_svc/bdev_svc.o 00:02:02.004 CC examples/blob/hello_world/hello_blob.o 00:02:02.004 LINK spdk_lspci 00:02:02.004 CC test/blobfs/mkfs/mkfs.o 00:02:02.004 CC test/event/scheduler/scheduler.o 00:02:02.004 CC examples/blob/cli/blobcli.o 00:02:02.004 CC test/dma/test_dma/test_dma.o 00:02:02.004 CC test/bdev/bdevio/bdevio.o 00:02:02.004 LINK rpc_client_test 00:02:02.266 LINK spdk_nvme_discover 00:02:02.266 LINK vhost 00:02:02.266 LINK nvmf_tgt 00:02:02.266 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:02.266 LINK spdk_trace_record 00:02:02.266 LINK interrupt_tgt 00:02:02.266 CC test/env/mem_callbacks/mem_callbacks.o 00:02:02.266 LINK spdk_tgt 00:02:02.266 CC test/lvol/esnap/esnap.o 00:02:02.525 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:02.525 LINK 
lsvmd 00:02:02.525 LINK pmr_persistence 00:02:02.525 LINK iscsi_tgt 00:02:02.525 LINK reactor 00:02:02.525 LINK jsoncat 00:02:02.525 LINK event_perf 00:02:02.525 LINK reactor_perf 00:02:02.525 LINK cmb_copy 00:02:02.525 LINK vtophys 00:02:02.525 LINK poller_perf 00:02:02.525 CXX test/cpp_headers/sock.o 00:02:02.525 LINK histogram_perf 00:02:02.525 LINK env_dpdk_post_init 00:02:02.525 LINK led 00:02:02.525 CXX test/cpp_headers/stdinc.o 00:02:02.525 LINK verify 00:02:02.525 CXX test/cpp_headers/string.o 00:02:02.525 LINK zipf 00:02:02.525 CXX test/cpp_headers/thread.o 00:02:02.525 LINK boot_partition 00:02:02.525 LINK stub 00:02:02.525 CXX test/cpp_headers/trace.o 00:02:02.525 CXX test/cpp_headers/trace_parser.o 00:02:02.525 CXX test/cpp_headers/tree.o 00:02:02.525 CXX test/cpp_headers/ublk.o 00:02:02.525 LINK startup 00:02:02.525 LINK hello_sock 00:02:02.526 CXX test/cpp_headers/util.o 00:02:02.526 CXX test/cpp_headers/uuid.o 00:02:02.526 LINK app_repeat 00:02:02.526 CXX test/cpp_headers/version.o 00:02:02.526 CXX test/cpp_headers/vfio_user_pci.o 00:02:02.526 LINK connect_stress 00:02:02.526 CXX test/cpp_headers/vfio_user_spec.o 00:02:02.526 CXX test/cpp_headers/vhost.o 00:02:02.526 CXX test/cpp_headers/vmd.o 00:02:02.526 CXX test/cpp_headers/xor.o 00:02:02.526 LINK ioat_perf 00:02:02.526 CXX test/cpp_headers/zipf.o 00:02:02.526 LINK doorbell_aers 00:02:02.526 LINK hello_bdev 00:02:02.526 LINK err_injection 00:02:02.526 LINK bdev_svc 00:02:02.526 LINK hotplug 00:02:02.526 LINK hello_world 00:02:02.526 LINK fused_ordering 00:02:02.526 LINK mkfs 00:02:02.526 LINK spdk_dd 00:02:02.785 LINK reset 00:02:02.785 LINK reserve 00:02:02.785 LINK simple_copy 00:02:02.785 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:02.785 LINK sgl 00:02:02.785 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:02.785 LINK nvme_dp 00:02:02.785 LINK hello_blob 00:02:02.785 LINK scheduler 00:02:02.785 LINK aer 00:02:02.785 LINK thread 00:02:02.785 LINK nvme_compliance 00:02:02.785 LINK idxd_perf 00:02:02.785 LINK fdp 00:02:02.785 LINK overhead 00:02:02.785 LINK nvmf 00:02:02.785 LINK arbitration 00:02:02.785 LINK spdk_trace 00:02:02.785 LINK reconnect 00:02:02.785 LINK abort 00:02:02.785 LINK dif 00:02:02.785 LINK pci_ut 00:02:02.785 LINK nvme_manage 00:02:02.785 LINK test_dma 00:02:02.785 LINK spdk_nvme 00:02:02.785 LINK bdevio 00:02:03.045 LINK accel_perf 00:02:03.045 LINK blobcli 00:02:03.045 LINK nvme_fuzz 00:02:03.045 LINK spdk_bdev 00:02:03.045 LINK spdk_nvme_identify 00:02:03.045 LINK spdk_nvme_perf 00:02:03.045 LINK vhost_fuzz 00:02:03.045 LINK spdk_top 00:02:03.045 LINK mem_callbacks 00:02:03.305 LINK bdevperf 00:02:03.305 LINK memory_ut 00:02:03.305 LINK cuse 00:02:03.566 LINK iscsi_fuzz 00:02:06.860 LINK esnap 00:02:06.860 00:02:06.860 real 0m49.517s 00:02:06.860 user 6m32.464s 00:02:06.860 sys 4m35.436s 00:02:06.860 23:46:36 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:06.860 23:46:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.860 ************************************ 00:02:06.860 END TEST make 00:02:06.860 ************************************ 00:02:06.860 23:46:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:06.860 23:46:36 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:06.860 23:46:36 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:06.860 23:46:36 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.860 23:46:36 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 
00:02:06.860 23:46:36 -- pm/common@45 -- $ pid=59115 00:02:06.860 23:46:36 -- pm/common@52 -- $ sudo kill -TERM 59115 00:02:06.860 23:46:36 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.860 23:46:36 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:06.860 23:46:36 -- pm/common@45 -- $ pid=59117 00:02:06.861 23:46:36 -- pm/common@52 -- $ sudo kill -TERM 59117 00:02:06.861 23:46:36 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.861 23:46:36 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:06.861 23:46:36 -- pm/common@45 -- $ pid=59118 00:02:06.861 23:46:36 -- pm/common@52 -- $ sudo kill -TERM 59118 00:02:06.861 23:46:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.861 23:46:37 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:06.861 23:46:37 -- pm/common@45 -- $ pid=59119 00:02:06.861 23:46:37 -- pm/common@52 -- $ sudo kill -TERM 59119 00:02:07.121 23:46:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:07.121 23:46:37 -- nvmf/common.sh@7 -- # uname -s 00:02:07.121 23:46:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:07.122 23:46:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:07.122 23:46:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:07.122 23:46:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:07.122 23:46:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:07.122 23:46:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:07.122 23:46:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:07.122 23:46:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:07.122 23:46:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:07.122 23:46:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:07.122 23:46:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:07.122 23:46:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:07.122 23:46:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:07.122 23:46:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:07.122 23:46:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:07.122 23:46:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:07.122 23:46:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:07.122 23:46:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:07.122 23:46:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.122 23:46:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.122 23:46:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.122 23:46:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.122 23:46:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.122 23:46:37 -- paths/export.sh@5 -- # export PATH 00:02:07.122 23:46:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.122 23:46:37 -- nvmf/common.sh@47 -- # : 0 00:02:07.122 23:46:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:07.122 23:46:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:07.122 23:46:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:07.122 23:46:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:07.122 23:46:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:07.122 23:46:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:07.122 23:46:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:07.122 23:46:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:07.122 23:46:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:07.122 23:46:37 -- spdk/autotest.sh@32 -- # uname -s 00:02:07.122 23:46:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:07.122 23:46:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:07.122 23:46:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:07.122 23:46:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:07.122 23:46:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:07.122 23:46:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:07.122 23:46:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:07.122 23:46:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:07.122 23:46:37 -- spdk/autotest.sh@48 -- # udevadm_pid=121313 00:02:07.122 23:46:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:07.122 23:46:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:07.122 23:46:37 -- pm/common@17 -- # local monitor 00:02:07.122 23:46:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.122 23:46:37 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=121315 00:02:07.122 23:46:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.122 23:46:37 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=121318 00:02:07.122 23:46:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.122 23:46:37 -- pm/common@21 -- # date +%s 00:02:07.122 23:46:37 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=121320 00:02:07.122 23:46:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.122 23:46:37 -- pm/common@21 -- # date +%s 00:02:07.122 23:46:37 -- pm/common@23 -- # 
MONITOR_RESOURCES_PIDS["$monitor"]=121323 00:02:07.122 23:46:37 -- pm/common@26 -- # sleep 1 00:02:07.122 23:46:37 -- pm/common@21 -- # date +%s 00:02:07.122 23:46:37 -- pm/common@21 -- # date +%s 00:02:07.122 23:46:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714167997 00:02:07.122 23:46:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714167997 00:02:07.122 23:46:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714167997 00:02:07.122 23:46:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714167997 00:02:07.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714167997_collect-vmstat.pm.log 00:02:07.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714167997_collect-cpu-load.pm.log 00:02:07.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714167997_collect-bmc-pm.bmc.pm.log 00:02:07.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714167997_collect-cpu-temp.pm.log 00:02:08.066 23:46:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:08.066 23:46:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:08.066 23:46:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:08.066 23:46:38 -- common/autotest_common.sh@10 -- # set +x 00:02:08.066 23:46:38 -- spdk/autotest.sh@59 -- # create_test_list 00:02:08.066 23:46:38 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:08.066 23:46:38 -- common/autotest_common.sh@10 -- # set +x 00:02:08.066 23:46:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:08.326 23:46:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.326 23:46:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.326 23:46:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:08.326 23:46:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.326 23:46:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:08.326 23:46:38 -- common/autotest_common.sh@1441 -- # uname 00:02:08.326 23:46:38 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:08.326 23:46:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:08.326 23:46:38 -- common/autotest_common.sh@1461 -- # uname 00:02:08.326 23:46:38 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:08.326 23:46:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:08.326 23:46:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:08.326 23:46:38 -- spdk/autotest.sh@72 -- # hash lcov 00:02:08.326 23:46:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == 
*\c\l\a\n\g* ]] 00:02:08.326 23:46:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:08.326 --rc lcov_branch_coverage=1 00:02:08.326 --rc lcov_function_coverage=1 00:02:08.326 --rc genhtml_branch_coverage=1 00:02:08.326 --rc genhtml_function_coverage=1 00:02:08.326 --rc genhtml_legend=1 00:02:08.326 --rc geninfo_all_blocks=1 00:02:08.326 ' 00:02:08.326 23:46:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:08.326 --rc lcov_branch_coverage=1 00:02:08.326 --rc lcov_function_coverage=1 00:02:08.326 --rc genhtml_branch_coverage=1 00:02:08.326 --rc genhtml_function_coverage=1 00:02:08.326 --rc genhtml_legend=1 00:02:08.326 --rc geninfo_all_blocks=1 00:02:08.326 ' 00:02:08.327 23:46:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:08.327 --rc lcov_branch_coverage=1 00:02:08.327 --rc lcov_function_coverage=1 00:02:08.327 --rc genhtml_branch_coverage=1 00:02:08.327 --rc genhtml_function_coverage=1 00:02:08.327 --rc genhtml_legend=1 00:02:08.327 --rc geninfo_all_blocks=1 00:02:08.327 --no-external' 00:02:08.327 23:46:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:08.327 --rc lcov_branch_coverage=1 00:02:08.327 --rc lcov_function_coverage=1 00:02:08.327 --rc genhtml_branch_coverage=1 00:02:08.327 --rc genhtml_function_coverage=1 00:02:08.327 --rc genhtml_legend=1 00:02:08.327 --rc geninfo_all_blocks=1 00:02:08.327 --no-external' 00:02:08.327 23:46:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:08.327 lcov: LCOV version 1.14 00:02:08.327 23:46:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:16.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:16.475 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:16.476 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 
00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:16.476 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:16.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:16.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:16.477 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:20.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:20.680 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:30.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:30.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:30.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:30.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:30.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:30.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:37.274 23:47:06 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:37.274 23:47:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:37.274 23:47:06 -- common/autotest_common.sh@10 -- # set +x 00:02:37.274 23:47:06 -- spdk/autotest.sh@91 -- # rm -f 00:02:37.274 23:47:06 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.617 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:40.617 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:40.617 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:40.617 23:47:10 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:40.937 23:47:10 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:40.937 23:47:10 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:40.938 23:47:10 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:40.938 23:47:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:40.938 23:47:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:40.938 23:47:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:40.938 23:47:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:40.938 23:47:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:40.938 23:47:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:40.938 23:47:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:40.938 23:47:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:40.938 23:47:10 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme0n1 00:02:40.938 23:47:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:40.938 23:47:10 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:40.938 No valid GPT data, bailing 00:02:40.938 23:47:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:40.938 23:47:10 -- scripts/common.sh@391 -- # pt= 00:02:40.938 23:47:10 -- scripts/common.sh@392 -- # return 1 00:02:40.938 23:47:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:40.938 1+0 records in 00:02:40.938 1+0 records out 00:02:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00208716 s, 502 MB/s 00:02:40.938 23:47:10 -- spdk/autotest.sh@118 -- # sync 00:02:40.938 23:47:10 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:40.938 23:47:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:40.938 23:47:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:49.127 23:47:18 -- spdk/autotest.sh@124 -- # uname -s 00:02:49.127 23:47:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:49.127 23:47:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.127 23:47:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:49.127 23:47:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:49.127 23:47:18 -- common/autotest_common.sh@10 -- # set +x 00:02:49.127 ************************************ 00:02:49.127 START TEST setup.sh 00:02:49.127 ************************************ 00:02:49.127 23:47:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.127 * Looking for test storage... 00:02:49.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:49.127 23:47:19 -- setup/test-setup.sh@10 -- # uname -s 00:02:49.127 23:47:19 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:49.127 23:47:19 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:49.127 23:47:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:49.127 23:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:49.127 23:47:19 -- common/autotest_common.sh@10 -- # set +x 00:02:49.127 ************************************ 00:02:49.127 START TEST acl 00:02:49.127 ************************************ 00:02:49.127 23:47:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:49.127 * Looking for test storage... 
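(Aside on the pre-cleanup step traced just above: the zoned-device scan, the block_in_use check and the dd wipe reduce to roughly the shell logic below. This is a condensed sketch rather than the actual autotest.sh/common.sh source; the helper-free layout and variable names are illustrative, only the individual probes — queue/zoned, spdk-gpt.py, blkid, dd — are the ones visible in the trace.)

```bash
# Sketch only: names are hypothetical, probes mirror the xtrace above.
shopt -s extglob nullglob
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

declare -A zoned_devs=()
for sysdev in /sys/block/nvme*; do
    # a namespace is treated as zoned (and left untouched) when queue/zoned is not "none"
    if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
        zoned_devs[${sysdev##*/}]=1
    fi
done

for dev in /dev/nvme*n!(*p*); do            # whole namespaces only, no partitions
    [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue
    # block_in_use idea: probe for GPT data first ("No valid GPT data, bailing" when empty) ...
    "$SPDK_DIR/scripts/spdk-gpt.py" "$dev" || true
    # ... then fall back to blkid; no partition table means the device is free to wipe
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # clear the first MiB before the tests
    fi
done
```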
00:02:49.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:49.127 23:47:19 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:49.127 23:47:19 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:49.127 23:47:19 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:49.127 23:47:19 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:49.127 23:47:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:49.127 23:47:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:49.127 23:47:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:49.127 23:47:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:49.127 23:47:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:49.127 23:47:19 -- setup/acl.sh@12 -- # devs=() 00:02:49.127 23:47:19 -- setup/acl.sh@12 -- # declare -a devs 00:02:49.127 23:47:19 -- setup/acl.sh@13 -- # drivers=() 00:02:49.127 23:47:19 -- setup/acl.sh@13 -- # declare -A drivers 00:02:49.127 23:47:19 -- setup/acl.sh@51 -- # setup reset 00:02:49.127 23:47:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:49.127 23:47:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.333 23:47:23 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:53.333 23:47:23 -- setup/acl.sh@16 -- # local dev driver 00:02:53.333 23:47:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.333 23:47:23 -- setup/acl.sh@15 -- # setup output status 00:02:53.333 23:47:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.333 23:47:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:55.877 Hugepages 00:02:55.877 node hugesize free / total 00:02:55.877 23:47:26 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:55.877 23:47:26 -- setup/acl.sh@19 -- # continue 00:02:55.877 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.877 23:47:26 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:55.877 23:47:26 -- setup/acl.sh@19 -- # continue 00:02:55.877 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 00:02:56.138 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:56.138 23:47:26 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:56.138 23:47:26 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.138 23:47:26 -- setup/acl.sh@20 -- # continue 00:02:56.138 23:47:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.138 23:47:26 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:56.138 23:47:26 -- setup/acl.sh@54 -- # run_test denied denied 00:02:56.138 23:47:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:56.138 23:47:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:56.138 23:47:26 -- common/autotest_common.sh@10 -- # set +x 00:02:56.399 ************************************ 00:02:56.399 START TEST denied 00:02:56.399 ************************************ 00:02:56.399 23:47:26 -- common/autotest_common.sh@1111 -- # denied 00:02:56.399 23:47:26 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:56.399 23:47:26 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:56.399 23:47:26 -- setup/acl.sh@38 -- # setup output config 00:02:56.399 23:47:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.399 23:47:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.700 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:02:59.700 23:47:29 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:02:59.700 23:47:29 -- setup/acl.sh@28 -- # local dev driver 00:02:59.700 23:47:29 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:59.700 23:47:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:02:59.700 23:47:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:02:59.700 23:47:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:59.700 23:47:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:59.700 23:47:29 -- setup/acl.sh@41 -- # setup reset 00:02:59.700 23:47:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.700 23:47:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.988 00:03:04.988 real 0m7.703s 00:03:04.988 user 0m2.355s 00:03:04.988 sys 0m4.484s 00:03:04.988 23:47:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:04.988 23:47:34 -- common/autotest_common.sh@10 -- # set +x 00:03:04.988 ************************************ 00:03:04.988 END TEST denied 00:03:04.988 ************************************ 00:03:04.988 23:47:34 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:04.988 23:47:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:04.988 23:47:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:04.988 23:47:34 -- common/autotest_common.sh@10 -- # set +x 00:03:04.988 ************************************ 00:03:04.988 START TEST allowed 00:03:04.988 ************************************ 00:03:04.988 23:47:34 -- common/autotest_common.sh@1111 -- # allowed 00:03:04.988 23:47:34 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:04.988 23:47:34 -- setup/acl.sh@45 -- # setup output config 00:03:04.988 23:47:34 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:04.988 23:47:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.988 23:47:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
00:03:10.274 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:10.274 23:47:39 -- setup/acl.sh@47 -- # verify 00:03:10.274 23:47:39 -- setup/acl.sh@28 -- # local dev driver 00:03:10.274 23:47:39 -- setup/acl.sh@48 -- # setup reset 00:03:10.274 23:47:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.274 23:47:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.492 00:03:14.492 real 0m9.505s 00:03:14.492 user 0m2.861s 00:03:14.492 sys 0m4.921s 00:03:14.492 23:47:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:14.492 23:47:43 -- common/autotest_common.sh@10 -- # set +x 00:03:14.492 ************************************ 00:03:14.492 END TEST allowed 00:03:14.492 ************************************ 00:03:14.492 00:03:14.492 real 0m24.738s 00:03:14.492 user 0m7.815s 00:03:14.492 sys 0m14.347s 00:03:14.492 23:47:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:14.492 23:47:43 -- common/autotest_common.sh@10 -- # set +x 00:03:14.492 ************************************ 00:03:14.492 END TEST acl 00:03:14.492 ************************************ 00:03:14.493 23:47:43 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:14.493 23:47:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:14.493 23:47:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:14.493 23:47:43 -- common/autotest_common.sh@10 -- # set +x 00:03:14.493 ************************************ 00:03:14.493 START TEST hugepages 00:03:14.493 ************************************ 00:03:14.493 23:47:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:14.493 * Looking for test storage... 
00:03:14.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.493 23:47:44 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:14.493 23:47:44 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:14.493 23:47:44 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:14.493 23:47:44 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:14.493 23:47:44 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:14.493 23:47:44 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:14.493 23:47:44 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:14.493 23:47:44 -- setup/common.sh@18 -- # local node= 00:03:14.493 23:47:44 -- setup/common.sh@19 -- # local var val 00:03:14.493 23:47:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.493 23:47:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.493 23:47:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.493 23:47:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.493 23:47:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.493 23:47:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 105873976 kB' 'MemAvailable: 109412144 kB' 'Buffers: 4124 kB' 'Cached: 11676048 kB' 'SwapCached: 0 kB' 'Active: 8782672 kB' 'Inactive: 3515796 kB' 'Active(anon): 8092276 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621640 kB' 'Mapped: 214640 kB' 'Shmem: 7473980 kB' 'KReclaimable: 312612 kB' 'Slab: 1127064 kB' 'SReclaimable: 312612 kB' 'SUnreclaim: 814452 kB' 'KernelStack: 27152 kB' 'PageTables: 9312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460884 kB' 'Committed_AS: 9502164 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234572 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.493 23:47:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.493 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.493 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 
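The long runs of setup/common.sh@31/@32 entries above and below are a single pass of the get_meminfo field scan: common.sh snapshots /proc/meminfo, then reads it back one "Key: value" pair at a time and continues past every key that is not the one requested, here Hugepagesize. A minimal sketch of that loop, reconstructed from the trace rather than copied from common.sh, and reading /proc/meminfo directly instead of the mapfile'd snapshot the script actually iterates:

get_meminfo_sketch() {                      # hypothetical name; the script's helper is get_meminfo
    local get=$1 var val _
    while IFS=': ' read -r var val _; do    # same IFS and read seen at common.sh@31
        [[ $var == "$get" ]] || continue    # the repeated "[[ Key == \H\u\g\e... ]] / continue" entries
        echo "$val"                         # e.g. 2048 for Hugepagesize, as echoed at common.sh@33
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch Hugepagesize             # would print 2048 on this node, matching the trace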
00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # continue 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.494 23:47:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.494 23:47:44 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.494 23:47:44 -- setup/common.sh@33 -- # echo 2048 00:03:14.494 23:47:44 -- setup/common.sh@33 -- # return 0 00:03:14.494 23:47:44 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:14.494 23:47:44 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:14.494 23:47:44 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:14.494 23:47:44 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:14.494 23:47:44 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:14.494 23:47:44 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:14.494 23:47:44 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:14.494 23:47:44 -- setup/hugepages.sh@207 -- # get_nodes 00:03:14.494 23:47:44 -- setup/hugepages.sh@27 -- # local node 00:03:14.494 23:47:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.494 23:47:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:14.494 23:47:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.494 23:47:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:14.494 23:47:44 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.494 23:47:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.494 23:47:44 -- setup/hugepages.sh@208 -- # clear_hp 00:03:14.494 23:47:44 -- setup/hugepages.sh@37 -- # local node hp 00:03:14.494 23:47:44 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:14.494 23:47:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:14.494 23:47:44 -- setup/hugepages.sh@41 -- # echo 0 00:03:14.494 23:47:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:14.494 23:47:44 -- setup/hugepages.sh@41 -- # echo 0 00:03:14.494 23:47:44 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:14.494 23:47:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:14.494 23:47:44 -- setup/hugepages.sh@41 -- # echo 0 00:03:14.494 23:47:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:14.494 23:47:44 -- setup/hugepages.sh@41 -- # echo 0 00:03:14.494 23:47:44 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:14.494 23:47:44 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:14.494 23:47:44 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:14.494 23:47:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:14.494 23:47:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:14.494 23:47:44 -- common/autotest_common.sh@10 -- # set +x 00:03:14.494 ************************************ 00:03:14.494 START TEST default_setup 00:03:14.494 ************************************ 00:03:14.494 23:47:44 -- common/autotest_common.sh@1111 -- # default_setup 00:03:14.494 23:47:44 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:14.494 23:47:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:14.494 23:47:44 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:14.494 23:47:44 -- setup/hugepages.sh@51 -- # shift 00:03:14.494 23:47:44 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:14.494 23:47:44 -- setup/hugepages.sh@52 -- # local node_ids 00:03:14.494 23:47:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:14.494 23:47:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:14.494 23:47:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:14.494 23:47:44 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:14.494 23:47:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:14.494 23:47:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:14.494 23:47:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:14.494 23:47:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:14.494 23:47:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:14.494 23:47:44 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
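The get_test_nr_hugepages 2097152 0 call traced above requests a 2097152 kB (2 GiB) hugepage pool pinned to node 0; with the 2048 kB page size that get_meminfo just reported, that works out to the nr_hugepages=1024 seen at hugepages.sh@57. A rough recomputation of that value, using hypothetical variable names rather than the script's own:

size_kb=2097152                              # argument to get_test_nr_hugepages
hugepage_kb=2048                             # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))    # 2097152 / 2048 = 1024
echo "nr_hugepages=$nr_hugepages"            # matches hugepages.sh@57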
00:03:14.494 23:47:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:14.494 23:47:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:14.494 23:47:44 -- setup/hugepages.sh@73 -- # return 0 00:03:14.494 23:47:44 -- setup/hugepages.sh@137 -- # setup output 00:03:14.494 23:47:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.494 23:47:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.799 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:17.799 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:18.062 23:47:48 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:18.062 23:47:48 -- setup/hugepages.sh@89 -- # local node 00:03:18.062 23:47:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.062 23:47:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.062 23:47:48 -- setup/hugepages.sh@92 -- # local surp 00:03:18.062 23:47:48 -- setup/hugepages.sh@93 -- # local resv 00:03:18.062 23:47:48 -- setup/hugepages.sh@94 -- # local anon 00:03:18.062 23:47:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.062 23:47:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.062 23:47:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.062 23:47:48 -- setup/common.sh@18 -- # local node= 00:03:18.062 23:47:48 -- setup/common.sh@19 -- # local var val 00:03:18.062 23:47:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.062 23:47:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.062 23:47:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.062 23:47:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.062 23:47:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.062 23:47:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.062 23:47:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108113876 kB' 'MemAvailable: 111652040 kB' 'Buffers: 4124 kB' 'Cached: 11676184 kB' 'SwapCached: 0 kB' 'Active: 8797032 kB' 'Inactive: 3515796 kB' 'Active(anon): 8106636 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635672 kB' 'Mapped: 214668 kB' 'Shmem: 7474116 kB' 'KReclaimable: 312604 kB' 'Slab: 1124064 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 811460 kB' 'KernelStack: 27328 
kB' 'PageTables: 9780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9513888 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234876 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.062 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.062 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 
23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.063 23:47:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.063 23:47:48 -- setup/common.sh@33 -- # echo 0 00:03:18.063 23:47:48 -- setup/common.sh@33 -- # return 0 00:03:18.063 23:47:48 -- setup/hugepages.sh@97 -- # anon=0 00:03:18.063 23:47:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.063 23:47:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.063 23:47:48 -- setup/common.sh@18 -- # local node= 00:03:18.063 23:47:48 -- setup/common.sh@19 -- # local var val 00:03:18.063 23:47:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.063 23:47:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.063 23:47:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.063 23:47:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.063 23:47:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.063 23:47:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.063 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108113220 kB' 'MemAvailable: 111651384 kB' 'Buffers: 4124 kB' 'Cached: 11676184 kB' 'SwapCached: 0 kB' 'Active: 8797204 kB' 'Inactive: 3515796 kB' 'Active(anon): 8106808 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635968 kB' 'Mapped: 215180 kB' 'Shmem: 7474116 kB' 'KReclaimable: 312604 kB' 'Slab: 1124044 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 811440 kB' 'KernelStack: 27168 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9513888 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234764 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 
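The /proc/meminfo snapshot printed just above is already consistent with the pool default_setup requested: HugePages_Total and HugePages_Free are both 1024, Hugepagesize is 2048 kB, and 1024 x 2048 kB = 2097152 kB, exactly the Hugetlb figure in the same snapshot, with nothing reserved or in use yet. A quick sanity check over those numbers, assuming only the values shown:

total=1024 free=1024 page_kb=2048 hugetlb_kb=2097152    # taken from the snapshot above
(( total * page_kb == hugetlb_kb && free == total )) && echo 'hugepage pool matches the 2 GiB request'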
00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 
23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': 
' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.064 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.064 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.065 23:47:48 -- setup/common.sh@33 -- # echo 0 00:03:18.065 23:47:48 -- setup/common.sh@33 -- # return 0 00:03:18.329 23:47:48 -- setup/hugepages.sh@99 -- # surp=0 00:03:18.329 23:47:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.329 23:47:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.329 23:47:48 -- setup/common.sh@18 -- # local node= 00:03:18.329 23:47:48 -- setup/common.sh@19 -- # local var val 00:03:18.329 23:47:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.329 23:47:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.329 23:47:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.329 23:47:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.329 23:47:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.329 23:47:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108114176 kB' 'MemAvailable: 111652340 kB' 'Buffers: 4124 kB' 'Cached: 11676200 kB' 'SwapCached: 0 kB' 'Active: 8801164 kB' 'Inactive: 3515796 kB' 'Active(anon): 8110768 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639980 kB' 'Mapped: 215172 kB' 'Shmem: 7474132 kB' 'KReclaimable: 312604 kB' 'Slab: 1124044 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 811440 kB' 'KernelStack: 27216 kB' 'PageTables: 9400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9520036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234816 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.329 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.329 23:47:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 
23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # 
continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.330 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.330 23:47:48 -- setup/common.sh@33 -- # echo 0 00:03:18.330 23:47:48 -- setup/common.sh@33 -- # return 0 00:03:18.330 23:47:48 -- setup/hugepages.sh@100 -- # resv=0 00:03:18.330 23:47:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.330 nr_hugepages=1024 00:03:18.330 23:47:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.330 resv_hugepages=0 00:03:18.330 23:47:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.330 surplus_hugepages=0 00:03:18.330 23:47:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.330 anon_hugepages=0 00:03:18.330 23:47:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.330 23:47:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.330 23:47:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.330 23:47:48 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:18.330 23:47:48 -- setup/common.sh@18 -- # local node= 00:03:18.330 23:47:48 -- setup/common.sh@19 -- # local var val 00:03:18.330 23:47:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.330 23:47:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.330 23:47:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.330 23:47:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.330 23:47:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.330 23:47:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.330 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108113060 kB' 'MemAvailable: 111651224 kB' 'Buffers: 4124 kB' 'Cached: 11676212 kB' 'SwapCached: 0 kB' 'Active: 8796268 kB' 'Inactive: 3515796 kB' 'Active(anon): 8105872 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635100 kB' 'Mapped: 215016 kB' 'Shmem: 7474144 kB' 'KReclaimable: 312604 kB' 'Slab: 1124012 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 811408 kB' 'KernelStack: 27184 kB' 'PageTables: 9600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9513928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234828 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
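The long run of IFS=': ' / read -r var val _ / [[ ... ]] / continue entries above and below is setup/common.sh's get_meminfo walking a meminfo file one "Key: value" pair at a time and echoing the value of the key it was asked for (first HugePages_Rsvd, now HugePages_Total). A minimal standalone sketch of the same lookup; meminfo_value is a hypothetical name, not the function traced here, and it keeps only the lookup logic (the real helper also caches the file with mapfile, as the trace shows):

    #!/usr/bin/env bash
    # meminfo_value KEY [NODE] - print the value of KEY from /proc/meminfo, or from
    # /sys/devices/system/node/node<NODE>/meminfo when a NUMA node is given.
    meminfo_value() {
        local key=$1 node=${2:-} file=/proc/meminfo line var val
        [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
        while IFS= read -r line; do
            line=${line#"Node ${node} "}             # per-node files prefix every line with "Node N "
            var=${line%%:*}
            val=${line#*:}
            if [[ $var == "$key" ]]; then
                val=${val#"${val%%[![:space:]]*}"}   # trim leading spaces
                echo "${val%% *}"                    # drop the trailing "kB" unit if present
                return 0
            fi
        done < "$file"
        return 1
    }
    # e.g. meminfo_value HugePages_Total   -> 1024 on this host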
00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # 
continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.331 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.331 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 
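These scans feed the consistency check at setup/hugepages.sh@107/@110 seen above and just below, (( 1024 == nr_hugepages + surp + resv )): the test passes only when the kernel's HugePages_Total equals the requested page count plus surplus and reserved pages (1024 == 1024 + 0 + 0 on this run). The same check can be reproduced outside the suite; an illustrative snippet, assuming /proc/sys/vm/nr_hugepages holds the requested count:

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    nr=$(cat /proc/sys/vm/nr_hugepages)
    (( total == nr + surp + resv )) \
        && echo "hugepage accounting consistent: ${total} pages" \
        || echo "mismatch: total=${total} nr=${nr} surp=${surp} resv=${resv}"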
00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.332 23:47:48 -- setup/common.sh@33 -- # echo 1024 00:03:18.332 23:47:48 -- setup/common.sh@33 -- # return 0 00:03:18.332 23:47:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.332 23:47:48 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.332 23:47:48 -- setup/hugepages.sh@27 -- # local node 00:03:18.332 23:47:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.332 23:47:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:18.332 23:47:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.332 23:47:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.332 23:47:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.332 23:47:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.332 23:47:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.332 23:47:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.332 23:47:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.332 23:47:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.332 23:47:48 -- setup/common.sh@18 -- # local node=0 00:03:18.332 23:47:48 -- setup/common.sh@19 -- # local var val 00:03:18.332 23:47:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.332 23:47:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.332 23:47:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.332 23:47:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.332 23:47:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.332 23:47:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58925280 kB' 'MemUsed: 6733728 kB' 'SwapCached: 0 
kB' 'Active: 2492936 kB' 'Inactive: 106900 kB' 'Active(anon): 2183416 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508512 kB' 'Mapped: 88600 kB' 'AnonPages: 94568 kB' 'Shmem: 2092092 kB' 'KernelStack: 12312 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163284 kB' 'Slab: 547952 kB' 'SReclaimable: 163284 kB' 'SUnreclaim: 384668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.332 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.332 23:47:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 
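From setup/hugepages.sh@117 onwards the same lookup is repeated per NUMA node: get_meminfo HugePages_Surp 0 switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") step strips before the key scan. The raw per-node file looks like this (values taken from the printf earlier in this trace, alignment approximate):

    $ head -3 /sys/devices/system/node/node0/meminfo
    Node 0 MemTotal:       65659008 kB
    Node 0 MemFree:        58925280 kB
    Node 0 MemUsed:         6733728 kB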
00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # continue 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.333 23:47:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.333 23:47:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.333 23:47:48 -- setup/common.sh@33 -- # echo 0 00:03:18.333 23:47:48 -- setup/common.sh@33 -- # return 0 00:03:18.333 23:47:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.333 23:47:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.333 23:47:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.333 23:47:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.333 23:47:48 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:18.333 node0=1024 expecting 1024 00:03:18.333 23:47:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:18.333 00:03:18.333 real 0m3.998s 00:03:18.333 user 0m1.510s 00:03:18.333 sys 0m2.486s 00:03:18.333 23:47:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:18.333 23:47:48 -- common/autotest_common.sh@10 -- # set +x 00:03:18.333 ************************************ 00:03:18.333 END TEST default_setup 00:03:18.333 ************************************ 00:03:18.333 23:47:48 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:18.333 23:47:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:18.333 23:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:18.333 23:47:48 -- common/autotest_common.sh@10 -- # set +x 00:03:18.594 ************************************ 00:03:18.594 START TEST per_node_1G_alloc 00:03:18.594 ************************************ 00:03:18.594 23:47:48 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:18.594 23:47:48 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:18.594 23:47:48 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:18.594 23:47:48 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:18.594 23:47:48 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:18.594 23:47:48 -- setup/hugepages.sh@51 -- # shift 00:03:18.594 23:47:48 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:18.594 23:47:48 -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.594 23:47:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.594 23:47:48 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:18.594 23:47:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:18.594 23:47:48 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:18.594 23:47:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.594 23:47:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:18.594 23:47:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.594 23:47:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.594 23:47:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.594 23:47:48 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:18.594 23:47:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.594 23:47:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:18.594 23:47:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.594 23:47:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:18.594 23:47:48 -- setup/hugepages.sh@73 -- # return 0 00:03:18.594 23:47:48 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:18.594 
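default_setup has just passed (node0=1024 expecting 1024) and per_node_1G_alloc begins: get_test_nr_hugepages 1048576 0 1 asks for 1 GiB of hugepages on each of NUMA nodes 0 and 1, which at the default 2048 kB hugepage size works out to 1048576 / 2048 = 512 pages per node, handed to scripts/setup.sh through the NRHUGE=512 and HUGENODE=0,1 settings traced here. Whether the pages really landed on each node can be read back from sysfs; an illustrative check, not part of the suite:

    for node in 0 1; do
        nr=$(cat /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages)
        echo "node${node}: ${nr} x 2 MiB hugepages"
    done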
23:47:48 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:18.594 23:47:48 -- setup/hugepages.sh@146 -- # setup output 00:03:18.594 23:47:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.594 23:47:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.905 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:21.905 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:21.905 23:47:51 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:21.905 23:47:51 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:21.905 23:47:51 -- setup/hugepages.sh@89 -- # local node 00:03:21.905 23:47:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.905 23:47:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.905 23:47:51 -- setup/hugepages.sh@92 -- # local surp 00:03:21.905 23:47:51 -- setup/hugepages.sh@93 -- # local resv 00:03:21.905 23:47:51 -- setup/hugepages.sh@94 -- # local anon 00:03:21.905 23:47:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.905 23:47:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.905 23:47:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.905 23:47:51 -- setup/common.sh@18 -- # local node= 00:03:21.905 23:47:51 -- setup/common.sh@19 -- # local var val 00:03:21.905 23:47:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.905 23:47:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.905 23:47:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.905 23:47:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.905 23:47:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.905 23:47:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.905 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.905 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.905 23:47:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108115788 kB' 'MemAvailable: 111653952 kB' 'Buffers: 4124 kB' 'Cached: 11676312 kB' 'SwapCached: 0 kB' 'Active: 8796204 kB' 'Inactive: 3515796 kB' 'Active(anon): 8105808 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634756 kB' 'Mapped: 213640 
kB' 'Shmem: 7474244 kB' 'KReclaimable: 312604 kB' 'Slab: 1125168 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812564 kB' 'KernelStack: 27072 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9504444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234780 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:51 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
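Before verifying the new allocation, verify_nr_hugepages checks the transparent-hugepage policy (the [[ always [madvise] never != *[never]* ]] test at setup/hugepages.sh@96) and, since THP is not disabled on this host, samples AnonHugePages - the scan in progress here - presumably so THP-backed anonymous memory is accounted separately from the explicit pool; it comes back as 0 just below. Both values can be inspected directly; illustrative commands, not from the test:

    cat /sys/kernel/mm/transparent_hugepage/enabled   # prints e.g. "always [madvise] never"
    grep '^AnonHugePages:' /proc/meminfo              # THP-backed anonymous memory, in kB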
00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.906 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.906 23:47:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.906 23:47:52 -- setup/common.sh@33 -- # echo 0 00:03:21.906 23:47:52 -- setup/common.sh@33 -- # return 0 00:03:21.906 23:47:52 -- setup/hugepages.sh@97 -- # anon=0 00:03:21.906 23:47:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.907 23:47:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.907 23:47:52 -- setup/common.sh@18 -- # local node= 00:03:21.907 23:47:52 -- setup/common.sh@19 -- # local var val 00:03:21.907 23:47:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.907 23:47:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.907 23:47:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.907 23:47:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.907 23:47:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.907 23:47:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108116420 kB' 'MemAvailable: 111654584 kB' 'Buffers: 4124 kB' 'Cached: 11676312 kB' 'SwapCached: 0 kB' 'Active: 8796100 kB' 'Inactive: 3515796 kB' 'Active(anon): 8105704 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634336 kB' 'Mapped: 213624 kB' 'Shmem: 7474244 kB' 'KReclaimable: 312604 kB' 'Slab: 1125168 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812564 kB' 'KernelStack: 27040 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9504456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 
23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.907 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.907 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.908 23:47:52 -- setup/common.sh@33 -- # echo 0 00:03:21.908 23:47:52 -- setup/common.sh@33 -- # return 0 00:03:21.908 23:47:52 -- setup/hugepages.sh@99 -- # surp=0 00:03:21.908 23:47:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.908 23:47:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.908 23:47:52 -- setup/common.sh@18 -- # local node= 00:03:21.908 23:47:52 -- setup/common.sh@19 -- # local var val 00:03:21.908 23:47:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.908 23:47:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.908 23:47:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.908 23:47:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.908 23:47:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.908 23:47:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108116924 kB' 'MemAvailable: 111655088 kB' 'Buffers: 4124 kB' 'Cached: 11676328 kB' 'SwapCached: 0 kB' 'Active: 8795456 kB' 'Inactive: 3515796 kB' 'Active(anon): 8105060 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634140 kB' 'Mapped: 213544 kB' 'Shmem: 7474260 kB' 'KReclaimable: 312604 kB' 'Slab: 1125168 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812564 kB' 'KernelStack: 27040 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9504468 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.908 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.908 23:47:52 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 
00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.909 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.909 23:47:52 -- setup/common.sh@33 -- # echo 0 00:03:21.909 23:47:52 -- setup/common.sh@33 -- # return 0 00:03:21.909 23:47:52 -- setup/hugepages.sh@100 -- # resv=0 00:03:21.909 23:47:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.909 nr_hugepages=1024 00:03:21.909 23:47:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.909 resv_hugepages=0 00:03:21.909 23:47:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.909 surplus_hugepages=0 00:03:21.909 23:47:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.909 anon_hugepages=0 00:03:21.909 23:47:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.909 23:47:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
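The trace above resolves HugePages_Surp and HugePages_Rsvd by scanning every key in /proc/meminfo (via IFS=': ' and read -r var val _) until the requested field matches, then echoing its value; both come back 0, so the accounting check 1024 == nr_hugepages + surp + resv holds. A minimal standalone sketch of that lookup pattern, using a hypothetical helper name and a simplified flow rather than the exact setup/common.sh code:

#!/usr/bin/env bash
# get_meminfo_value KEY [NODE] - print KEY's value from /proc/meminfo, or from
# the node-local meminfo file when a NUMA node number is given (illustrative
# helper only; not part of the SPDK scripts).
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val rest
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node "$node" }   # node meminfo prefixes every key with "Node <n> "
        IFS=': ' read -r var val rest <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# Reproduce the summary printed by the trace.
nr_hugepages=$(get_meminfo_value HugePages_Total)
resv=$(get_meminfo_value HugePages_Rsvd)
surp=$(get_meminfo_value HugePages_Surp)
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

# Same consistency check as in the trace, with 1024 as the requested page count.
want=1024
(( want == nr_hugepages + surp + resv )) && echo "hugepage accounting matches the requested $want pages"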
00:03:21.909 23:47:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.909 23:47:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.909 23:47:52 -- setup/common.sh@18 -- # local node= 00:03:21.909 23:47:52 -- setup/common.sh@19 -- # local var val 00:03:21.909 23:47:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.909 23:47:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.909 23:47:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.909 23:47:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.909 23:47:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.909 23:47:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.909 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108117512 kB' 'MemAvailable: 111655676 kB' 'Buffers: 4124 kB' 'Cached: 11676356 kB' 'SwapCached: 0 kB' 'Active: 8795116 kB' 'Inactive: 3515796 kB' 'Active(anon): 8104720 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633740 kB' 'Mapped: 213544 kB' 'Shmem: 7474288 kB' 'KReclaimable: 312604 kB' 'Slab: 1125168 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812564 kB' 'KernelStack: 27024 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9504484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 
-- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.910 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.910 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 
00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- 
setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # continue 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.911 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.911 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.911 23:47:52 -- setup/common.sh@33 -- # echo 1024 00:03:21.911 23:47:52 -- setup/common.sh@33 -- # return 0 00:03:21.911 23:47:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.911 23:47:52 -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.911 23:47:52 -- setup/hugepages.sh@27 -- # local node 00:03:21.911 23:47:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.911 23:47:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.911 23:47:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.911 23:47:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.911 23:47:52 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.911 23:47:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.911 23:47:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.911 23:47:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.911 23:47:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.911 23:47:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.911 23:47:52 -- setup/common.sh@18 -- # local node=0 00:03:21.911 23:47:52 -- setup/common.sh@19 -- # local var val 00:03:21.911 23:47:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.911 23:47:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.911 23:47:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.911 23:47:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.911 23:47:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.911 23:47:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:22.175 23:47:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59981268 kB' 'MemUsed: 5677740 kB' 'SwapCached: 0 kB' 'Active: 2490104 kB' 'Inactive: 106900 kB' 'Active(anon): 2180584 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508596 kB' 'Mapped: 87484 kB' 'AnonPages: 91664 kB' 'Shmem: 2092176 kB' 'KernelStack: 12328 kB' 'PageTables: 3432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163284 kB' 'Slab: 548912 kB' 'SReclaimable: 163284 kB' 'SUnreclaim: 385628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # 
continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.175 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.175 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 
23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@33 -- # echo 0 00:03:22.176 23:47:52 -- setup/common.sh@33 -- # return 0 00:03:22.176 23:47:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.176 23:47:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.176 23:47:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.176 23:47:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.176 23:47:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.176 23:47:52 -- setup/common.sh@18 -- # local node=1 00:03:22.176 23:47:52 -- setup/common.sh@19 -- # local var val 00:03:22.176 23:47:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.176 23:47:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.176 23:47:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.176 23:47:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.176 23:47:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.176 23:47:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 48136732 kB' 'MemUsed: 12543128 kB' 'SwapCached: 0 kB' 'Active: 6305392 kB' 'Inactive: 3408896 kB' 'Active(anon): 5924516 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3408896 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9171884 kB' 'Mapped: 126060 kB' 'AnonPages: 542480 kB' 'Shmem: 5382112 kB' 'KernelStack: 14712 kB' 'PageTables: 5540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149320 kB' 'Slab: 576256 kB' 'SReclaimable: 149320 kB' 'SUnreclaim: 426936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 
00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.176 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.176 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 23:47:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # continue 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.177 23:47:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.177 23:47:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.177 23:47:52 -- setup/common.sh@33 -- # echo 0 00:03:22.177 23:47:52 -- setup/common.sh@33 -- # return 0 00:03:22.177 23:47:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.177 23:47:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.177 23:47:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.177 23:47:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.177 23:47:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:22.177 node0=512 expecting 512 00:03:22.177 23:47:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.177 23:47:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.177 23:47:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.177 23:47:52 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:22.177 node1=512 expecting 512 00:03:22.177 23:47:52 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:22.177 00:03:22.177 real 0m3.575s 00:03:22.177 user 0m1.349s 00:03:22.177 sys 0m2.207s 00:03:22.177 23:47:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.177 23:47:52 -- common/autotest_common.sh@10 -- # set +x 00:03:22.177 ************************************ 00:03:22.177 END TEST per_node_1G_alloc 00:03:22.177 ************************************ 00:03:22.177 23:47:52 -- 
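The even_2G_alloc run traced below asks get_test_nr_hugepages for 2097152 kB of the default 2048 kB hugepages, i.e. 1024 pages, and get_test_nr_hugepages_per_node splits them evenly across the two NUMA nodes (512 each) before setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. As a rough sketch of that arithmetic only (the test itself drives the allocation through scripts/setup.sh, not this snippet), the per-node counts map onto the kernel's standard sysfs knobs like so:

  # illustrative sketch, not the SPDK setup script
  size_kb=2097152                                 # requested total, in kB
  hp_kb=2048                                      # default hugepage size, in kB
  nr_hugepages=$(( size_kb / hp_kb ))             # 1024 pages
  nodes=(/sys/devices/system/node/node[0-9]*)
  per_node=$(( nr_hugepages / ${#nodes[@]} ))     # 512 per node on this 2-node box
  for n in "${nodes[@]}"; do
      echo "$per_node" | sudo tee "$n/hugepages/hugepages-${hp_kb}kB/nr_hugepages" >/dev/null
  done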
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:22.177 23:47:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.177 23:47:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.177 23:47:52 -- common/autotest_common.sh@10 -- # set +x 00:03:22.177 ************************************ 00:03:22.177 START TEST even_2G_alloc 00:03:22.177 ************************************ 00:03:22.177 23:47:52 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:22.177 23:47:52 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:22.177 23:47:52 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.177 23:47:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.177 23:47:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.177 23:47:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.177 23:47:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.177 23:47:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.177 23:47:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.177 23:47:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.177 23:47:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.177 23:47:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.177 23:47:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.177 23:47:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.177 23:47:52 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.177 23:47:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.177 23:47:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.177 23:47:52 -- setup/hugepages.sh@83 -- # : 512 00:03:22.177 23:47:52 -- setup/hugepages.sh@84 -- # : 1 00:03:22.177 23:47:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.177 23:47:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.177 23:47:52 -- setup/hugepages.sh@83 -- # : 0 00:03:22.177 23:47:52 -- setup/hugepages.sh@84 -- # : 0 00:03:22.177 23:47:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.177 23:47:52 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:22.177 23:47:52 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:22.177 23:47:52 -- setup/hugepages.sh@153 -- # setup output 00:03:22.177 23:47:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.177 23:47:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.484 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:25.484 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:25.484 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:25.484 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:25.484 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:25.484 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:25.484 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:25.485 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:25.485 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:25.485 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:25.485 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:25.485 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:25.485 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:25.485 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:25.485 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:25.485 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:25.485 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:25.745 23:47:55 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:25.746 23:47:55 -- setup/hugepages.sh@89 -- # local node 00:03:25.746 23:47:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.746 23:47:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.746 23:47:55 -- setup/hugepages.sh@92 -- # local surp 00:03:25.746 23:47:55 -- setup/hugepages.sh@93 -- # local resv 00:03:25.746 23:47:55 -- setup/hugepages.sh@94 -- # local anon 00:03:25.746 23:47:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.746 23:47:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.746 23:47:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.746 23:47:55 -- setup/common.sh@18 -- # local node= 00:03:25.746 23:47:55 -- setup/common.sh@19 -- # local var val 00:03:25.746 23:47:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.746 23:47:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.746 23:47:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.746 23:47:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.746 23:47:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.746 23:47:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108126500 kB' 'MemAvailable: 111664664 kB' 'Buffers: 4124 kB' 'Cached: 11676460 kB' 'SwapCached: 0 kB' 'Active: 8796644 kB' 'Inactive: 3515796 kB' 'Active(anon): 8106248 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634724 kB' 'Mapped: 213676 kB' 'Shmem: 7474392 kB' 'KReclaimable: 312604 kB' 'Slab: 1124476 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 811872 kB' 'KernelStack: 27056 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9505192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.746 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 
23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.013 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.013 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.014 23:47:55 -- 
setup/common.sh@33 -- # echo 0 00:03:26.014 23:47:55 -- setup/common.sh@33 -- # return 0 00:03:26.014 23:47:55 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.014 23:47:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.014 23:47:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.014 23:47:55 -- setup/common.sh@18 -- # local node= 00:03:26.014 23:47:55 -- setup/common.sh@19 -- # local var val 00:03:26.014 23:47:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.014 23:47:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.014 23:47:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.014 23:47:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.014 23:47:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.014 23:47:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108128908 kB' 'MemAvailable: 111667072 kB' 'Buffers: 4124 kB' 'Cached: 11676460 kB' 'SwapCached: 0 kB' 'Active: 8796548 kB' 'Inactive: 3515796 kB' 'Active(anon): 8106152 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635136 kB' 'Mapped: 213580 kB' 'Shmem: 7474392 kB' 'KReclaimable: 312604 kB' 'Slab: 1124432 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 811828 kB' 'KernelStack: 27040 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9505204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234796 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 
23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:55 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 
23:47:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': 
' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.014 23:47:56 -- setup/common.sh@33 -- # echo 0 00:03:26.014 23:47:56 -- setup/common.sh@33 -- # return 0 00:03:26.014 23:47:56 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.014 23:47:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.014 23:47:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.014 23:47:56 -- setup/common.sh@18 -- # local node= 00:03:26.014 23:47:56 -- setup/common.sh@19 -- # local var val 00:03:26.014 23:47:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.014 23:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.014 23:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.014 23:47:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.014 23:47:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.014 23:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- 
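Each block of xtrace above and below is the same small lookup repeated once per /proc/meminfo field: get_meminfo opens /proc/meminfo (or a node's meminfo file with its "Node N " prefix stripped), splits every line on ': ', and echoes the value whose key matches the one requested; here both HugePages_Surp and HugePages_Rsvd come back as 0. A condensed re-sketch of that helper, reconstructed from the trace rather than copied verbatim from setup/common.sh:

  # condensed sketch of the traced helper (see setup/common.sh for the real one)
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      [[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
          mem_f=/sys/devices/system/node/node${node}/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node ${node} }")   # per-node files prefix each line with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  surp=$(get_meminfo HugePages_Surp)    # 0 in the run above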
setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108129560 kB' 'MemAvailable: 111667724 kB' 'Buffers: 4124 kB' 'Cached: 11676472 kB' 'SwapCached: 0 kB' 'Active: 8796280 kB' 'Inactive: 3515796 kB' 'Active(anon): 8105884 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634796 kB' 'Mapped: 213580 kB' 'Shmem: 7474404 kB' 'KReclaimable: 312604 kB' 'Slab: 1124432 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 811828 kB' 'KernelStack: 27040 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9505216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234796 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.014 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.014 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- 
setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.015 23:47:56 -- setup/common.sh@33 -- # echo 0 00:03:26.015 23:47:56 -- setup/common.sh@33 -- # return 0 00:03:26.015 23:47:56 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.015 23:47:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.015 nr_hugepages=1024 00:03:26.015 23:47:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.015 resv_hugepages=0 00:03:26.015 23:47:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.015 surplus_hugepages=0 00:03:26.015 23:47:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.015 anon_hugepages=0 00:03:26.015 23:47:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.015 23:47:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.015 23:47:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.015 23:47:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.015 23:47:56 -- setup/common.sh@18 -- # local node= 00:03:26.015 23:47:56 -- setup/common.sh@19 -- # local var val 00:03:26.015 23:47:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.015 23:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.015 23:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.015 23:47:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.015 23:47:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.015 23:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108130144 kB' 'MemAvailable: 111668308 kB' 'Buffers: 4124 kB' 'Cached: 11676500 kB' 'SwapCached: 0 kB' 'Active: 8795892 kB' 'Inactive: 3515796 kB' 'Active(anon): 8105496 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634380 kB' 'Mapped: 213580 kB' 'Shmem: 7474432 kB' 'KReclaimable: 312604 kB' 'Slab: 1124432 
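With anon, surplus and reserved hugepages all echoed as 0 above, the verification reduces to a plain accounting identity: the kernel must report exactly the 1024 pages the test requested, none of them surplus or reserved. A standalone way to express the same two (( ... )) checks, using awk in place of the script's get_meminfo helper purely for illustration:

  # illustrative accounting check mirroring the traced (( ... )) tests
  requested=1024
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
  resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
  (( total == requested + surp + resv )) || echo "unexpected surplus/reserved pages" >&2
  (( total == requested ))               || echo "hugepage count mismatch" >&2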
kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 811828 kB' 'KernelStack: 27024 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9505232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234796 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.015 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.015 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 
23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 
00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.016 23:47:56 -- setup/common.sh@33 -- # echo 1024 00:03:26.016 23:47:56 -- setup/common.sh@33 -- # return 0 00:03:26.016 23:47:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.016 23:47:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.016 23:47:56 -- setup/hugepages.sh@27 -- # local node 00:03:26.016 23:47:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.016 23:47:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.016 23:47:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.016 23:47:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.016 23:47:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.016 23:47:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.016 23:47:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.016 23:47:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.016 23:47:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.016 23:47:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.016 23:47:56 -- setup/common.sh@18 -- # local node=0 00:03:26.016 23:47:56 -- setup/common.sh@19 -- # local var val 00:03:26.016 23:47:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.016 23:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.016 23:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.016 23:47:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.016 23:47:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.016 23:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59991148 kB' 'MemUsed: 5667860 kB' 'SwapCached: 0 kB' 'Active: 2491452 kB' 'Inactive: 106900 kB' 'Active(anon): 2181932 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508716 kB' 'Mapped: 87516 kB' 'AnonPages: 92852 kB' 'Shmem: 2092296 kB' 'KernelStack: 12296 kB' 'PageTables: 3296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163284 kB' 'Slab: 548328 kB' 'SReclaimable: 163284 kB' 'SUnreclaim: 385044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 
23:47:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 
-- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.016 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.016 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@33 -- # echo 0 00:03:26.017 23:47:56 -- setup/common.sh@33 -- # return 0 00:03:26.017 23:47:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.017 23:47:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.017 23:47:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.017 23:47:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:26.017 23:47:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.017 23:47:56 -- setup/common.sh@18 -- # local node=1 00:03:26.017 23:47:56 -- setup/common.sh@19 -- # local var val 00:03:26.017 23:47:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.017 23:47:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.017 23:47:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:26.017 23:47:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:26.017 23:47:56 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:26.017 23:47:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 48139416 kB' 'MemUsed: 12540444 kB' 'SwapCached: 0 kB' 'Active: 6304456 kB' 'Inactive: 3408896 kB' 'Active(anon): 5923580 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3408896 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9171920 kB' 'Mapped: 126064 kB' 'AnonPages: 541528 kB' 'Shmem: 5382148 kB' 'KernelStack: 14728 kB' 'PageTables: 5632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149320 kB' 'Slab: 576104 kB' 'SReclaimable: 149320 kB' 'SUnreclaim: 426784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 
-- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # continue 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.017 23:47:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.017 23:47:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.017 23:47:56 -- setup/common.sh@33 -- # echo 0 00:03:26.017 23:47:56 -- setup/common.sh@33 -- # return 0 00:03:26.017 23:47:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.017 23:47:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.017 23:47:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.017 23:47:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.017 23:47:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.017 node0=512 expecting 512 00:03:26.017 23:47:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.017 23:47:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.017 23:47:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.017 23:47:56 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:26.017 node1=512 expecting 512 00:03:26.017 23:47:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:26.017 00:03:26.017 real 0m3.786s 00:03:26.017 user 0m1.476s 00:03:26.017 sys 0m2.359s 00:03:26.017 23:47:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:26.017 23:47:56 -- common/autotest_common.sh@10 -- # set +x 00:03:26.017 ************************************ 00:03:26.017 END TEST even_2G_alloc 00:03:26.017 ************************************ 00:03:26.017 23:47:56 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:26.017 23:47:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.017 23:47:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.017 23:47:56 -- common/autotest_common.sh@10 -- # set +x 00:03:26.279 ************************************ 00:03:26.279 START TEST odd_alloc 00:03:26.279 ************************************ 00:03:26.279 23:47:56 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:26.279 23:47:56 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:26.279 23:47:56 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:26.279 23:47:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.279 23:47:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.279 23:47:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:26.279 23:47:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.279 23:47:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.279 23:47:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.279 23:47:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:26.279 23:47:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.279 23:47:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.279 23:47:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.279 23:47:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.279 
23:47:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.279 23:47:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.279 23:47:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.279 23:47:56 -- setup/hugepages.sh@83 -- # : 513 00:03:26.279 23:47:56 -- setup/hugepages.sh@84 -- # : 1 00:03:26.279 23:47:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.279 23:47:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:26.279 23:47:56 -- setup/hugepages.sh@83 -- # : 0 00:03:26.279 23:47:56 -- setup/hugepages.sh@84 -- # : 0 00:03:26.279 23:47:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.279 23:47:56 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:26.279 23:47:56 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:26.279 23:47:56 -- setup/hugepages.sh@160 -- # setup output 00:03:26.279 23:47:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.279 23:47:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.693 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:29.693 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:29.693 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:29.959 23:48:00 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:29.959 23:48:00 -- setup/hugepages.sh@89 -- # local node 00:03:29.959 23:48:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.959 23:48:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.959 23:48:00 -- setup/hugepages.sh@92 -- # local surp 00:03:29.959 23:48:00 -- setup/hugepages.sh@93 -- # local resv 00:03:29.959 23:48:00 -- setup/hugepages.sh@94 -- # local anon 00:03:29.959 23:48:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.959 23:48:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.959 23:48:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.959 23:48:00 -- setup/common.sh@18 -- # local node= 00:03:29.959 23:48:00 -- setup/common.sh@19 -- # local var val 00:03:29.959 23:48:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.959 23:48:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.959 23:48:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.959 23:48:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.959 23:48:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.959 23:48:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 23:48:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108136036 kB' 'MemAvailable: 111674200 kB' 'Buffers: 4124 kB' 'Cached: 11676616 kB' 'SwapCached: 0 kB' 'Active: 8798368 kB' 'Inactive: 3515796 kB' 'Active(anon): 8107972 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636972 kB' 'Mapped: 213684 kB' 'Shmem: 7474548 kB' 'KReclaimable: 312604 kB' 'Slab: 1125056 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812452 kB' 'KernelStack: 27168 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 9510124 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234892 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.959 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.959 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 
-- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.960 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.960 23:48:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.961 23:48:00 -- setup/common.sh@33 -- # echo 0 00:03:29.961 23:48:00 -- setup/common.sh@33 -- # return 0 00:03:29.961 23:48:00 -- setup/hugepages.sh@97 -- # anon=0 00:03:29.961 23:48:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.961 23:48:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.961 23:48:00 -- setup/common.sh@18 -- # local node= 00:03:29.961 23:48:00 -- setup/common.sh@19 -- # local var val 00:03:29.961 23:48:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.961 23:48:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.961 23:48:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.961 23:48:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.961 23:48:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.961 23:48:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108134648 kB' 'MemAvailable: 111672812 kB' 'Buffers: 4124 kB' 'Cached: 11676620 kB' 'SwapCached: 0 kB' 'Active: 8801196 kB' 'Inactive: 3515796 kB' 'Active(anon): 8110800 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'AnonPages: 640052 kB' 'Mapped: 213628 kB' 'Shmem: 7474552 kB' 'KReclaimable: 312604 kB' 'Slab: 1124808 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812204 kB' 'KernelStack: 27184 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 9511424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 
23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.961 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.961 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 
-- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.962 23:48:00 -- setup/common.sh@33 -- # echo 0 00:03:29.962 23:48:00 -- setup/common.sh@33 -- # return 0 00:03:29.962 23:48:00 -- setup/hugepages.sh@99 -- # surp=0 00:03:29.962 23:48:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.962 23:48:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.962 23:48:00 -- setup/common.sh@18 -- # local node= 00:03:29.962 23:48:00 -- setup/common.sh@19 -- # local var val 00:03:29.962 23:48:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.962 23:48:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.962 23:48:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.962 23:48:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.962 23:48:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.962 23:48:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.962 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.962 23:48:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108129664 kB' 'MemAvailable: 111667828 kB' 'Buffers: 4124 kB' 'Cached: 11676632 kB' 'SwapCached: 0 kB' 'Active: 8804872 kB' 'Inactive: 3515796 kB' 'Active(anon): 8114476 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643764 kB' 'Mapped: 214132 kB' 'Shmem: 7474564 kB' 'KReclaimable: 312604 kB' 'Slab: 1124808 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812204 kB' 'KernelStack: 27328 kB' 'PageTables: 10016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 9515312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234908 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:29.962 23:48:00 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 
23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.963 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.963 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.964 23:48:00 -- setup/common.sh@33 -- # echo 0 00:03:29.964 23:48:00 -- setup/common.sh@33 -- # return 0 
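For readers following the xtrace above: the loop being stepped through is setup/common.sh's get_meminfo scanning /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo file) for one field at a time, and setup/hugepages.sh then combining the AnonHugePages, HugePages_Surp and HugePages_Rsvd results with the requested nr_hugepages=1025. Below is a minimal standalone sketch reconstructed from the trace, not the verbatim SPDK scripts; the function name, file paths, field names and per-node counts (512/513) come from the log, while the driver code at the bottom and its messages are illustrative.

# Sketch of the logic traced above (assumptions noted in comments).
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val
    # When no node index is passed, /sys/devices/system/node/node/meminfo does not
    # exist (as seen in the trace), so the system-wide /proc/meminfo is used instead.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed with "Node N "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "Field:   value kB" into the field name and its numeric value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    return 1
}

# Illustrative driver mirroring the hugepages.sh steps in the trace.
nr_hugepages=1025                       # pool size requested by this test
anon=$(get_meminfo AnonHugePages)       # 0 in the snapshot above
surp=$(get_meminfo HugePages_Surp)      # 0
resv=$(get_meminfo HugePages_Rsvd)      # 0
total=$(get_meminfo HugePages_Total)    # 1025

# The accounting check performed next in the trace: the pool the kernel reports
# must equal the requested size plus any surplus and reserved pages.
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"

# Per-node verification: node0 and node1 were asked for 512 and 513 pages.
for node in 0 1; do
    echo "node$node HugePages_Total: $(get_meminfo HugePages_Total "$node")"
done

Run on the same host state, the sketch should report the same figures the mapfile snapshots above show (1025 pages overall, 512 on node0 and 513 on node1), provided the hugepage pool configured earlier in the run is still in place.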
00:03:29.964 23:48:00 -- setup/hugepages.sh@100 -- # resv=0 00:03:29.964 23:48:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:29.964 nr_hugepages=1025 00:03:29.964 23:48:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.964 resv_hugepages=0 00:03:29.964 23:48:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.964 surplus_hugepages=0 00:03:29.964 23:48:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.964 anon_hugepages=0 00:03:29.964 23:48:00 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:29.964 23:48:00 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:29.964 23:48:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.964 23:48:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.964 23:48:00 -- setup/common.sh@18 -- # local node= 00:03:29.964 23:48:00 -- setup/common.sh@19 -- # local var val 00:03:29.964 23:48:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.964 23:48:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.964 23:48:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.964 23:48:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.964 23:48:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.964 23:48:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108125736 kB' 'MemAvailable: 111663900 kB' 'Buffers: 4124 kB' 'Cached: 11676632 kB' 'SwapCached: 0 kB' 'Active: 8808836 kB' 'Inactive: 3515796 kB' 'Active(anon): 8118440 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647412 kB' 'Mapped: 214544 kB' 'Shmem: 7474564 kB' 'KReclaimable: 312604 kB' 'Slab: 1124808 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812204 kB' 'KernelStack: 27312 kB' 'PageTables: 9732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 9519988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234864 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # 
continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.964 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.964 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.965 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.965 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.966 23:48:00 -- setup/common.sh@33 -- # echo 1025 00:03:29.966 23:48:00 -- setup/common.sh@33 -- # return 0 00:03:29.966 23:48:00 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:29.966 23:48:00 -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.966 23:48:00 -- setup/hugepages.sh@27 -- # local node 00:03:29.966 23:48:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.966 23:48:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.966 23:48:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.966 23:48:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:29.966 23:48:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.966 23:48:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.966 23:48:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.966 23:48:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.966 23:48:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.966 23:48:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.966 23:48:00 -- setup/common.sh@18 -- # local node=0 00:03:29.966 
23:48:00 -- setup/common.sh@19 -- # local var val 00:03:29.966 23:48:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.966 23:48:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.966 23:48:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.966 23:48:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.966 23:48:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.966 23:48:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59981348 kB' 'MemUsed: 5677660 kB' 'SwapCached: 0 kB' 'Active: 2501916 kB' 'Inactive: 106900 kB' 'Active(anon): 2192396 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508780 kB' 'Mapped: 88316 kB' 'AnonPages: 103620 kB' 'Shmem: 2092360 kB' 'KernelStack: 12424 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163284 kB' 'Slab: 548532 kB' 'SReclaimable: 163284 kB' 'SUnreclaim: 385248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.966 23:48:00 -- setup/common.sh@32 -- # continue 00:03:29.966 23:48:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.966 23:48:00 -- setup/common.sh@31 -- 
# read -r var val _
[setup/common.sh@32: the remaining node0 meminfo fields (SecPageTables through HugePages_Free) are each tested against HugePages_Surp and skipped via continue]
00:03:30.231 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.231 23:48:00 -- setup/common.sh@33 -- # echo 0
00:03:30.231 23:48:00 -- setup/common.sh@33 -- # return 0
00:03:30.231 23:48:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.231 23:48:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.231 23:48:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.231 23:48:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:30.231 23:48:00 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.231 23:48:00 -- setup/common.sh@18 -- # local node=1
00:03:30.231 23:48:00 -- setup/common.sh@19 -- # local var val
00:03:30.231 23:48:00 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.231 23:48:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.231 23:48:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:30.231 23:48:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:30.231 23:48:00 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.231 23:48:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.231 23:48:00 -- setup/common.sh@31 -- # IFS=': '
00:03:30.231 23:48:00 -- setup/common.sh@31 -- # read -r var val _
00:03:30.231 23:48:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 48144304 kB' 'MemUsed: 12535556 kB' 'SwapCached: 0 kB' 'Active: 6305952 kB' 'Inactive: 3408896 kB' 'Active(anon): 5925076 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3408896 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9172004 kB' 'Mapped: 126228 kB' 'AnonPages: 542992 kB' 'Shmem: 5382232 kB' 'KernelStack: 14840 kB' 'PageTables: 5568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149320 kB' 'Slab: 576244 kB' 'SReclaimable: 149320 kB' 'SUnreclaim: 426924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[setup/common.sh@32: every node1 meminfo field before HugePages_Surp is tested and skipped via continue]
00:03:30.232 23:48:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.232 23:48:00 -- setup/common.sh@33 -- # echo 0
00:03:30.232 23:48:00 -- setup/common.sh@33 -- # return 0
00:03:30.232 23:48:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.232 23:48:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.232 23:48:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.232 23:48:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.232 23:48:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:30.232 node0=512 expecting 513
00:03:30.232 23:48:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.232 23:48:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.232 23:48:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.232 23:48:00 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:30.232 node1=513 expecting 512
00:03:30.232 23:48:00 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
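The sorted_t/sorted_s bookkeeping above makes the final comparison order-insensitive, which is why "node0=512 expecting 513" can still end in a pass: only the multiset of per-node counts has to match, not which node holds which count. A minimal sketch of that trick (same_distribution is an illustrative name, not a function from setup/hugepages.sh):

    same_distribution() {
        # Using each count as an indexed-array subscript means "${!arr[*]}"
        # later expands the distinct counts in ascending order, so node order
        # drops out of the comparison.
        local -a want=() got=()
        local v
        for v in $1; do want[v]=1; done    # e.g. "513 512" (expected per node)
        for v in $2; do got[v]=1;  done    # e.g. "512 513" (observed per node)
        [[ "${!want[*]}" == "${!got[*]}" ]]
    }
    # same_distribution "513 512" "512 513"   -> exit status 0 (pass)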
00:03:30.232
00:03:30.232 real    0m3.859s
00:03:30.232 user    0m1.591s
00:03:30.232 sys     0m2.322s
00:03:30.232 23:48:00 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:30.232 23:48:00 -- common/autotest_common.sh@10 -- # set +x
00:03:30.232 ************************************
00:03:30.232 END TEST odd_alloc
00:03:30.232 ************************************
00:03:30.232 23:48:00 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:30.232 23:48:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:30.232 23:48:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:30.232 23:48:00 -- common/autotest_common.sh@10 -- # set +x
00:03:30.232 ************************************
00:03:30.232 START TEST custom_alloc
00:03:30.232 ************************************
00:03:30.232 23:48:00 -- common/autotest_common.sh@1111 -- # custom_alloc
00:03:30.232 23:48:00 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:30.232 23:48:00 -- setup/hugepages.sh@169 -- # local node
00:03:30.232 23:48:00 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:30.232 23:48:00 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:30.232 23:48:00 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:30.232 23:48:00 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:30.232 23:48:00 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:30.232 23:48:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:30.232 23:48:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:30.232 23:48:00 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:30.232 23:48:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[setup/hugepages.sh@62-84: the per-node bookkeeping splits the 512 pages evenly, assigning nodes_test[1]=256 and nodes_test[0]=256]
00:03:30.232 23:48:00 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:30.232 23:48:00 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
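For reference, the get_test_nr_hugepages calls in this test reduce to simple arithmetic: the requested size is divided by the hugepage size and the result is spread over the nodes. The helper below is a hypothetical sketch of that calculation, not the script's code; treating the argument and the 2048 hugepage size as kB is an assumption taken from the Hugepagesize value printed later in this log.

    get_test_nr_hugepages_sketch() {
        local size=$1 no_nodes=$2
        local default_hugepages=2048              # assumed: 2048 kB hugepages
        local nr=$(( size / default_hugepages ))  # 1048576 -> 512, 2097152 -> 1024
        local per_node=$(( nr / no_nodes ))       # 512 over 2 nodes -> 256 each
        echo "nr_hugepages=$nr per_node=$per_node"
    }
    # get_test_nr_hugepages_sketch 1048576 2   -> nr_hugepages=512 per_node=256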
00:03:30.232 23:48:00 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:30.232 23:48:00 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:30.232 23:48:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:30.232 23:48:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:30.232 23:48:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:30.232 23:48:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[setup/hugepages.sh@62-78: with nodes_hp[0] already set, the helper seeds nodes_test[0]=512 and returns 0]
00:03:30.232 23:48:00 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:30.232 23:48:00 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:30.232 23:48:00 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:30.232 23:48:00 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:30.232 23:48:00 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:30.232 23:48:00 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:30.232 23:48:00 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:30.232 23:48:00 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[setup/hugepages.sh@62-78: the final pass records nodes_test[0]=512 and nodes_test[1]=1024 and returns 0]
00:03:30.232 23:48:00 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:30.232 23:48:00 -- setup/hugepages.sh@187 -- # setup output
00:03:30.232 23:48:00 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.232 23:48:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
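The HUGENODE value handed to setup.sh above is nothing more than the per-node counts joined with commas. A small sketch of that assembly step, assuming nodes_hp holds one count per NUMA node (build_hugenode_spec is an illustrative name, not part of the script):

    build_hugenode_spec() {
        local -a nodes_hp=("$@")    # e.g. 512 1024, one entry per NUMA node
        local -a parts=()
        local node
        for node in "${!nodes_hp[@]}"; do
            parts+=("nodes_hp[$node]=${nodes_hp[node]}")
        done
        local IFS=,                 # join the pieces with commas
        echo "${parts[*]}"
    }
    # HUGENODE=$(build_hugenode_spec 512 1024)
    #   -> nodes_hp[0]=512,nodes_hp[1]=1024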
00:03:33.537 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:33.797 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:33.797 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:33.797 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:33.797 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:33.797 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:33.798 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:33.798 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:34.062 23:48:04 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:34.062 23:48:04 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:34.062 23:48:04 -- setup/hugepages.sh@89 -- # local node
00:03:34.062 23:48:04 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:34.062 23:48:04 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:34.062 23:48:04 -- setup/hugepages.sh@92 -- # local surp
00:03:34.062 23:48:04 -- setup/hugepages.sh@93 -- # local resv
00:03:34.062 23:48:04 -- setup/hugepages.sh@94 -- # local anon
00:03:34.062 23:48:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:34.062 23:48:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:34.062 23:48:04 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:34.062 23:48:04 -- setup/common.sh@18 -- # local node=
00:03:34.062 23:48:04 -- setup/common.sh@19 -- # local var val
00:03:34.062 23:48:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.062 23:48:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.062 23:48:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.062 23:48:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.062 23:48:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.062 23:48:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.062 23:48:04 -- setup/common.sh@31 -- # IFS=': '
00:03:34.062 23:48:04 -- setup/common.sh@31 -- # read -r var val _
00:03:34.062 23:48:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 107009512 kB' 'MemAvailable: 110547676 kB' 'Buffers: 4124 kB' 'Cached: 11676760 kB' 'SwapCached: 0 kB' 'Active: 8804424 kB' 'Inactive: 3515796 kB' 'Active(anon): 8114028 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642700 kB' 'Mapped: 214592 kB' 'Shmem: 7474692 kB' 'KReclaimable: 312604 kB' 'Slab: 1125180 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812576 kB' 'KernelStack: 27120 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 9516064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234928 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB'
[setup/common.sh@32: every /proc/meminfo field before AnonHugePages is tested and skipped via continue]
00:03:34.063 23:48:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:34.063 23:48:04 -- setup/common.sh@33 -- # echo 0
00:03:34.063 23:48:04 -- setup/common.sh@33 -- # return 0
00:03:34.063 23:48:04 -- setup/hugepages.sh@97 -- # anon=0
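verify_nr_hugepages reads individual fields back out of /proc/meminfo, or out of the per-node meminfo under sysfs when a node id is given (those lines carry a "Node <N> " prefix). The stand-alone sketch below mirrors that lookup; it strips the prefix with sed, whereas the real get_meminfo in setup/common.sh uses the parameter expansion visible in the trace, so treat it as an illustration only.

    get_meminfo_sketch() {
        local get=$1 node=$2                  # field name, optional NUMA node id
        local mem_f=/proc/meminfo var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Drop the per-node "Node <N> " prefix, then split each line on ": ".
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # get_meminfo_sketch HugePages_Surp 1   -> surplus 2 MB pages on node 1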
00:03:34.063 23:48:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-31: same local/mapfile/read setup as above, this time with get=HugePages_Surp and no node argument]
00:03:34.063 23:48:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 107010472 kB' 'MemAvailable: 110548636 kB' 'Buffers: 4124 kB' 'Cached: 11676760 kB' 'SwapCached: 0 kB' 'Active: 8804516 kB' 'Inactive: 3515796 kB' 'Active(anon): 8114120 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642864 kB' 'Mapped: 214580 kB' 'Shmem: 7474692 kB' 'KReclaimable: 312604 kB' 'Slab: 1125244 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812640 kB' 'KernelStack: 27120 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 9516076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234880 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB'
[setup/common.sh@32: every /proc/meminfo field before HugePages_Surp is tested and skipped via continue]
00:03:34.065 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.065 23:48:04 -- setup/common.sh@33 -- # echo 0
00:03:34.065 23:48:04 -- setup/common.sh@33 -- # return 0
00:03:34.065 23:48:04 -- setup/hugepages.sh@99 -- # surp=0
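Between them, the AnonHugePages, HugePages_Surp and HugePages_Rsvd lookups (plus HugePages_Total and HugePages_Free) are all the state this verification needs. The snippet below pulls the same numbers in a single pass purely as an illustration; it is not how setup/hugepages.sh is written, and verify_hugepages_sketch is a hypothetical name.

    verify_hugepages_sketch() {
        local expected=$1 total free rsvd surp anon
        # One awk pass over /proc/meminfo instead of one lookup per field.
        read -r total free rsvd surp anon < <(awk '
            /^HugePages_Total:/ { t = $2 }
            /^HugePages_Free:/  { f = $2 }
            /^HugePages_Rsvd:/  { r = $2 }
            /^HugePages_Surp:/  { s = $2 }
            /^AnonHugePages:/   { a = $2 }
            END { print t, f, r, s, a }' /proc/meminfo)
        echo "total=$total free=$free rsvd=$rsvd surp=$surp anon=$anon"
        (( total == expected ))    # exit status reports pass/fail
    }
    # verify_hugepages_sketch 1536   -> total=1536 free=1536 rsvd=0 surp=0 anon=0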
00:03:34.065 23:48:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:34.065 23:48:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:34.065 23:48:04 -- setup/common.sh@18 -- # local node=
00:03:34.065 23:48:04 -- setup/common.sh@19 -- # local var val
00:03:34.065 23:48:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.065 23:48:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.065 23:48:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.065 23:48:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.065 23:48:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.065 23:48:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.065 23:48:04 -- setup/common.sh@31 -- # IFS=': '
00:03:34.065 23:48:04 -- setup/common.sh@31 -- # read -r var val _
00:03:34.065 23:48:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 107010920 kB' 'MemAvailable: 110549084 kB' 'Buffers: 4124 kB' 'Cached: 11676772 kB' 'SwapCached: 0 kB' 'Active: 8804532 kB' 'Inactive: 3515796 kB' 'Active(anon): 8114136 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642864 kB' 'Mapped: 214580 kB' 'Shmem: 7474704 kB' 'KReclaimable: 312604 kB' 'Slab: 1125244 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812640 kB' 'KernelStack: 27120 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 9516092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234880 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB'
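The hugepage counters in the snapshot just printed can also be read per node straight from sysfs. The loop below is only a convenience for eyeballing the same state the test validates; it assumes 2048 kB pages, matching the Hugepagesize reported above, and is not part of the SPDK scripts.

    # List nr/free/surplus hugepages for every NUMA node.
    for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        printf '%s: nr=%s free=%s surplus=%s\n' "${n%/hugepages/*}" \
            "$(cat "$n/nr_hugepages")" \
            "$(cat "$n/free_hugepages")" \
            "$(cat "$n/surplus_hugepages")"
    done
    # e.g. /sys/devices/system/node/node0: nr=512 free=512 surplus=0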
[setup/common.sh@32: the /proc/meminfo fields from MemTotal through FileHugePages are tested against HugePages_Rsvd and skipped via continue]
00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': '
00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val
_ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.066 23:48:04 -- setup/common.sh@33 -- # echo 0 00:03:34.066 23:48:04 -- setup/common.sh@33 -- # return 0 00:03:34.066 23:48:04 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.066 23:48:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:34.066 nr_hugepages=1536 00:03:34.066 23:48:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.066 resv_hugepages=0 00:03:34.066 23:48:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.066 surplus_hugepages=0 00:03:34.066 23:48:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.066 anon_hugepages=0 00:03:34.066 23:48:04 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:34.066 23:48:04 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:34.066 23:48:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.066 23:48:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.066 23:48:04 -- setup/common.sh@18 -- # local node= 00:03:34.066 23:48:04 -- setup/common.sh@19 -- # local var val 00:03:34.066 23:48:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.066 23:48:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.066 23:48:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.066 23:48:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.066 23:48:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.066 23:48:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 107011676 kB' 'MemAvailable: 110549840 
kB' 'Buffers: 4124 kB' 'Cached: 11676772 kB' 'SwapCached: 0 kB' 'Active: 8804532 kB' 'Inactive: 3515796 kB' 'Active(anon): 8114136 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642864 kB' 'Mapped: 214580 kB' 'Shmem: 7474704 kB' 'KReclaimable: 312604 kB' 'Slab: 1125244 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812640 kB' 'KernelStack: 27120 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 9516104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234880 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:34.066 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.066 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.066 23:48:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- 
# continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.067 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.067 23:48:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.068 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.068 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.068 23:48:04 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:34.068 23:48:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.068 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.068 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.068 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.068 23:48:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.068 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.068 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.068 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.068 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.068 23:48:04 -- setup/common.sh@33 -- # echo 1536 00:03:34.068 23:48:04 -- setup/common.sh@33 -- # return 0 00:03:34.068 23:48:04 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:34.068 23:48:04 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.068 23:48:04 -- setup/hugepages.sh@27 -- # local node 00:03:34.068 23:48:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.068 23:48:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.068 23:48:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.068 23:48:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.068 23:48:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.068 23:48:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.068 23:48:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.068 23:48:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.068 23:48:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.068 23:48:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.068 23:48:04 -- setup/common.sh@18 -- # local node=0 00:03:34.068 23:48:04 -- setup/common.sh@19 -- # local var val 00:03:34.068 23:48:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.068 23:48:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.068 23:48:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.068 23:48:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.068 23:48:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.330 23:48:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59915924 kB' 'MemUsed: 5743084 kB' 'SwapCached: 0 kB' 'Active: 2500028 kB' 'Inactive: 106900 kB' 'Active(anon): 2190508 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508864 kB' 'Mapped: 88356 kB' 'AnonPages: 101384 kB' 'Shmem: 2092444 kB' 'KernelStack: 12392 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163284 kB' 'Slab: 548776 kB' 'SReclaimable: 163284 kB' 'SUnreclaim: 385492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- 
setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.330 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.330 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 
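For the per-node queries visible above, every line of the node's meminfo carries a "Node <id> " prefix, and the trace strips it with an extglob pattern before running the same field scan. The mapfile and prefix-strip lines below are the idiom the trace itself uses; the grep at the end is only added here to display a few of the resulting fields, and the snippet assumes /sys/devices/system/node/node0/meminfo exists as it does on this test host:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep HugePages_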
00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@33 -- # echo 0 00:03:34.331 23:48:04 -- setup/common.sh@33 -- # return 0 00:03:34.331 23:48:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.331 23:48:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.331 23:48:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.331 23:48:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:34.331 23:48:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.331 23:48:04 -- 
setup/common.sh@18 -- # local node=1 00:03:34.331 23:48:04 -- setup/common.sh@19 -- # local var val 00:03:34.331 23:48:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.331 23:48:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.331 23:48:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:34.331 23:48:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:34.331 23:48:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.331 23:48:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 47095584 kB' 'MemUsed: 13584276 kB' 'SwapCached: 0 kB' 'Active: 6304564 kB' 'Inactive: 3408896 kB' 'Active(anon): 5923688 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3408896 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9172076 kB' 'Mapped: 126224 kB' 'AnonPages: 541484 kB' 'Shmem: 5382304 kB' 'KernelStack: 14728 kB' 'PageTables: 5640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149320 kB' 'Slab: 576468 kB' 'SReclaimable: 149320 kB' 'SUnreclaim: 427148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.331 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.331 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 
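Once this node 1 scan returns its HugePages_Surp, the same accumulation that already ran for node 0 above (hugepages.sh@115-117) folds it into the per-node totals, and the test then compares the observed split with the one it requested: 512 pages on node 0 and 1024 on node 1, as the "expecting" lines a little further down confirm. A compact sketch of that final comparison; the array names mirror the trace, but which array holds the observed side versus the requested side is not spelled out there, so the labels and the join/compare step are an assumption:

    nodes_test=([0]=512 [1]=1024)   # counts read back per node (per the meminfo dumps above)
    nodes_sys=([0]=512 [1]=1024)    # split the test asked for
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Join both splits and compare, mirroring the "512,1024 == 512,1024" check in the trace.
    observed=$(IFS=,; echo "${nodes_test[*]}")
    expected=$(IFS=,; echo "${nodes_sys[*]}")
    [[ $observed == "$expected" ]] && echo "per-node hugepage split matches"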
00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # continue 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.332 23:48:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.332 23:48:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.332 23:48:04 -- setup/common.sh@33 -- # echo 0 00:03:34.332 23:48:04 -- setup/common.sh@33 -- # return 0 00:03:34.332 23:48:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.332 23:48:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.332 23:48:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.332 23:48:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.332 23:48:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.332 node0=512 expecting 512 00:03:34.332 23:48:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.332 23:48:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.332 23:48:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.332 23:48:04 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:34.332 node1=1024 expecting 1024 00:03:34.332 23:48:04 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:34.332 00:03:34.332 real 0m3.935s 00:03:34.332 user 0m1.520s 00:03:34.332 sys 0m2.478s 00:03:34.332 23:48:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.332 23:48:04 -- common/autotest_common.sh@10 -- # set +x 00:03:34.332 ************************************ 00:03:34.332 END TEST custom_alloc 00:03:34.332 ************************************ 00:03:34.332 23:48:04 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:34.332 23:48:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.332 23:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.332 23:48:04 -- common/autotest_common.sh@10 -- # set +x 00:03:34.332 ************************************ 00:03:34.332 START TEST no_shrink_alloc 00:03:34.332 ************************************ 00:03:34.332 23:48:04 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:34.332 23:48:04 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:34.332 23:48:04 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.332 23:48:04 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:34.332 23:48:04 -- setup/hugepages.sh@51 -- # shift 00:03:34.332 23:48:04 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:34.332 23:48:04 -- setup/hugepages.sh@52 -- # local node_ids 00:03:34.332 23:48:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages 
)) 00:03:34.332 23:48:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.332 23:48:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:34.332 23:48:04 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:34.332 23:48:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.332 23:48:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.332 23:48:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.332 23:48:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.332 23:48:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.332 23:48:04 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:34.332 23:48:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:34.332 23:48:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:34.332 23:48:04 -- setup/hugepages.sh@73 -- # return 0 00:03:34.332 23:48:04 -- setup/hugepages.sh@198 -- # setup output 00:03:34.332 23:48:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.332 23:48:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.542 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:38.542 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:38.542 23:48:08 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:38.542 23:48:08 -- setup/hugepages.sh@89 -- # local node 00:03:38.542 23:48:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.542 23:48:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.542 23:48:08 -- setup/hugepages.sh@92 -- # local surp 00:03:38.542 23:48:08 -- setup/hugepages.sh@93 -- # local resv 00:03:38.542 23:48:08 -- setup/hugepages.sh@94 -- # local anon 00:03:38.542 23:48:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.542 23:48:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.542 23:48:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.542 23:48:08 -- setup/common.sh@18 -- # local node= 00:03:38.542 23:48:08 -- setup/common.sh@19 -- # local var val 00:03:38.542 23:48:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.542 23:48:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.542 23:48:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.542 23:48:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.542 23:48:08 -- setup/common.sh@28 -- # mapfile -t 
mem 00:03:38.542 23:48:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108056988 kB' 'MemAvailable: 111595152 kB' 'Buffers: 4124 kB' 'Cached: 11676900 kB' 'SwapCached: 0 kB' 'Active: 8802444 kB' 'Inactive: 3515796 kB' 'Active(anon): 8112048 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640176 kB' 'Mapped: 213828 kB' 'Shmem: 7474832 kB' 'KReclaimable: 312604 kB' 'Slab: 1125264 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812660 kB' 'KernelStack: 27360 kB' 'PageTables: 9836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9514096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235004 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 
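The no_shrink_alloc test that started above asks get_test_nr_hugepages for 2097152 kB backed by node 0 only; with the 2048 kB hugepage size reported in the meminfo dumps, that works out to 2097152 / 2048 = 1024 pages, which is why the trace shows nr_hugepages=1024 and nodes_test[0]=1024. A tiny sketch of that conversion; the function name and the 2097152/0 call come from the trace, while the body below is a simplified assumption rather than the real hugepages.sh:

    get_test_nr_hugepages() {
        local size=$1; shift             # requested size in kB
        local node_ids=("$@")            # e.g. ("0") for no_shrink_alloc
        local hugepagesize_kb=2048       # Hugepagesize reported in the dumps above
        local nr_hugepages=$(( size / hugepagesize_kb ))
        echo "nr_hugepages=$nr_hugepages on node(s): ${node_ids[*]:-all}"
    }
    get_test_nr_hugepages 2097152 0      # -> nr_hugepages=1024 on node(s): 0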
00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.542 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.542 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.543 23:48:08 -- setup/common.sh@33 -- # echo 0 00:03:38.543 23:48:08 -- setup/common.sh@33 -- # return 0 00:03:38.543 23:48:08 -- setup/hugepages.sh@97 -- # anon=0 00:03:38.543 23:48:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.543 23:48:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.543 23:48:08 -- setup/common.sh@18 -- # local node= 00:03:38.543 23:48:08 -- setup/common.sh@19 -- # local var val 00:03:38.543 23:48:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.543 23:48:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.543 23:48:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.543 23:48:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.543 23:48:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.543 23:48:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108056444 kB' 'MemAvailable: 111594608 kB' 'Buffers: 4124 kB' 'Cached: 11676904 kB' 'SwapCached: 0 kB' 'Active: 8801216 kB' 'Inactive: 3515796 kB' 'Active(anon): 8110820 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639464 kB' 'Mapped: 213688 kB' 'Shmem: 7474836 kB' 'KReclaimable: 312604 kB' 'Slab: 1125196 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812592 kB' 'KernelStack: 27072 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9516760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234972 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.543 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.543 23:48:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 
23:48:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.544 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.544 23:48:08 -- setup/common.sh@33 -- # echo 0 00:03:38.544 23:48:08 -- setup/common.sh@33 -- # return 0 00:03:38.544 23:48:08 -- setup/hugepages.sh@99 -- # surp=0 00:03:38.544 23:48:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.544 23:48:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.544 23:48:08 -- setup/common.sh@18 -- # local node= 00:03:38.544 23:48:08 -- setup/common.sh@19 -- # local var val 00:03:38.544 23:48:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.544 23:48:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.544 23:48:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.544 23:48:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.544 23:48:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.544 23:48:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.544 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108056264 kB' 'MemAvailable: 111594428 kB' 'Buffers: 4124 kB' 'Cached: 11676916 kB' 'SwapCached: 0 kB' 'Active: 8802620 kB' 'Inactive: 3515796 kB' 'Active(anon): 8112224 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640940 kB' 'Mapped: 213688 kB' 'Shmem: 7474848 kB' 'KReclaimable: 312604 kB' 'Slab: 1125196 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812592 kB' 'KernelStack: 27376 kB' 'PageTables: 10080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9515628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235068 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:38.545 23:48:08 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- 
setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.545 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.545 23:48:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 
23:48:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.546 23:48:08 -- setup/common.sh@33 -- # echo 0 00:03:38.546 
23:48:08 -- setup/common.sh@33 -- # return 0 00:03:38.546 23:48:08 -- setup/hugepages.sh@100 -- # resv=0 00:03:38.546 23:48:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.546 nr_hugepages=1024 00:03:38.546 23:48:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.546 resv_hugepages=0 00:03:38.546 23:48:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.546 surplus_hugepages=0 00:03:38.546 23:48:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.546 anon_hugepages=0 00:03:38.546 23:48:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.546 23:48:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.546 23:48:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.546 23:48:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.546 23:48:08 -- setup/common.sh@18 -- # local node= 00:03:38.546 23:48:08 -- setup/common.sh@19 -- # local var val 00:03:38.546 23:48:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.546 23:48:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.546 23:48:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.546 23:48:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.546 23:48:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.546 23:48:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108057844 kB' 'MemAvailable: 111596008 kB' 'Buffers: 4124 kB' 'Cached: 11676928 kB' 'SwapCached: 0 kB' 'Active: 8802440 kB' 'Inactive: 3515796 kB' 'Active(anon): 8112044 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640668 kB' 'Mapped: 213756 kB' 'Shmem: 7474860 kB' 'KReclaimable: 312604 kB' 'Slab: 1125196 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812592 kB' 'KernelStack: 27280 kB' 'PageTables: 9976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9514156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235148 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.546 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.546 23:48:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 
00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 
23:48:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.547 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.547 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.547 23:48:08 -- setup/common.sh@33 -- # echo 1024 00:03:38.547 23:48:08 -- setup/common.sh@33 -- # return 0 00:03:38.547 23:48:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.547 23:48:08 -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.547 23:48:08 -- setup/hugepages.sh@27 -- # local node 00:03:38.547 23:48:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.547 23:48:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.547 23:48:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.547 23:48:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:38.547 23:48:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.547 23:48:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.548 23:48:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.548 23:48:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.548 23:48:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.548 23:48:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.548 23:48:08 
-- setup/common.sh@18 -- # local node=0 00:03:38.548 23:48:08 -- setup/common.sh@19 -- # local var val 00:03:38.548 23:48:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.548 23:48:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.548 23:48:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.548 23:48:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.548 23:48:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.548 23:48:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58871680 kB' 'MemUsed: 6787328 kB' 'SwapCached: 0 kB' 'Active: 2493888 kB' 'Inactive: 106900 kB' 'Active(anon): 2184368 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508916 kB' 'Mapped: 87644 kB' 'AnonPages: 95060 kB' 'Shmem: 2092496 kB' 'KernelStack: 12504 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163284 kB' 'Slab: 548776 kB' 'SReclaimable: 163284 kB' 'SUnreclaim: 385492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 
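The setup/hugepages.sh@27-33 records a little earlier in this pass are get_nodes enumerating the NUMA nodes: the loop walks /sys/devices/system/node/node+([0-9]), uses ${node##*node} to turn each directory name into a numeric index, and records the current 2048 kB hugepage count per node (1024 on node0, 0 on node1, no_nodes=2). A minimal sketch of that enumeration is below; reading the count from the per-node hugepages sysfs file is an assumption made for illustration, not a quote of the real get_nodes helper.

# Illustrative sketch only: list NUMA nodes and their current 2 MiB hugepage counts.
# The real script matches node+([0-9]) via extglob; a plain glob is used here.
nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}                                        # .../node0 -> 0
    nodes_sys[$id]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "found ${#nodes_sys[@]} nodes"                          # 2 on the machine traced here
for id in "${!nodes_sys[@]}"; do
    echo "node$id: ${nodes_sys[$id]} x 2048 kB hugepages"    # node0: 1024, node1: 0 above
done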
00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
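The long run of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue' records through here is setup/common.sh's get_meminfo helper scanning /sys/devices/system/node/node0/meminfo field by field: each per-node meminfo line starts with a 'Node 0 ' prefix, which the helper strips (the "${mem[@]#Node +([0-9]) }" expansion visible above), and the loop echoes the value once it reaches the requested key, HugePages_Surp for node 0 in this pass (0, per the records that follow). A self-contained sketch of the same lookup is below; get_node_meminfo and its argument handling are illustrative stand-ins, not the actual SPDK helper, which also handles the no-node case by reading /proc/meminfo instead.

# Illustrative sketch only: fetch one field from a node's meminfo.
# Assumes the 'Node <id> <Field>: <value> [kB]' layout shown in the snapshot above.
get_node_meminfo() {
    local node=$1 key=$2
    local file=/sys/devices/system/node/node$node/meminfo
    [[ -e $file ]] || return 1
    while read -r _ _ field value _; do            # first two fields are 'Node <id>'
        if [[ $field == "$key:" ]]; then
            echo "$value"
            return 0
        fi
    done <"$file"
    return 1
}

get_node_meminfo 0 HugePages_Surp                  # prints 0 for the node traced here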
00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.548 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.548 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.549 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 23:48:08 -- setup/common.sh@32 -- # continue 00:03:38.549 23:48:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 23:48:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 23:48:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 23:48:08 -- setup/common.sh@33 -- # echo 0 00:03:38.549 23:48:08 -- setup/common.sh@33 -- # return 0 00:03:38.549 23:48:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.549 23:48:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.549 23:48:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.549 23:48:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.549 23:48:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:38.549 node0=1024 expecting 1024 00:03:38.549 23:48:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:38.549 23:48:08 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:38.549 23:48:08 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:38.549 23:48:08 -- setup/hugepages.sh@202 -- # setup output 00:03:38.549 23:48:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.549 23:48:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.859 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:41.859 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.859 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.123 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:42.123 23:48:12 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:42.123 23:48:12 -- setup/hugepages.sh@89 -- # local node 00:03:42.123 23:48:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.123 23:48:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.123 23:48:12 -- setup/hugepages.sh@92 -- # local surp 00:03:42.123 23:48:12 -- setup/hugepages.sh@93 -- # local resv 00:03:42.123 23:48:12 -- setup/hugepages.sh@94 -- # local anon 00:03:42.123 23:48:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.123 23:48:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.123 23:48:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.123 23:48:12 -- setup/common.sh@18 -- # local node= 00:03:42.123 23:48:12 -- setup/common.sh@19 -- # local var val 00:03:42.123 23:48:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.123 23:48:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.123 23:48:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.123 23:48:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.123 23:48:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.123 23:48:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108045500 kB' 'MemAvailable: 111583664 kB' 'Buffers: 4124 kB' 'Cached: 11677040 kB' 'SwapCached: 0 kB' 'Active: 8803860 kB' 'Inactive: 3515796 kB' 'Active(anon): 8113464 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642572 kB' 'Mapped: 214192 kB' 'Shmem: 7474972 kB' 'KReclaimable: 312604 kB' 'Slab: 1125452 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812848 kB' 'KernelStack: 27232 kB' 'PageTables: 9532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9515004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234956 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 23:48:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.123 23:48:12 -- setup/common.sh@33 -- # echo 0 00:03:42.123 23:48:12 -- setup/common.sh@33 -- # return 0 00:03:42.123 23:48:12 -- setup/hugepages.sh@97 -- # anon=0 00:03:42.123 23:48:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.123 
23:48:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.123 23:48:12 -- setup/common.sh@18 -- # local node= 00:03:42.123 23:48:12 -- setup/common.sh@19 -- # local var val 00:03:42.123 23:48:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.123 23:48:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.123 23:48:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.123 23:48:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.123 23:48:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.123 23:48:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108051480 kB' 'MemAvailable: 111589644 kB' 'Buffers: 4124 kB' 'Cached: 11677040 kB' 'SwapCached: 0 kB' 'Active: 8806348 kB' 'Inactive: 3515796 kB' 'Active(anon): 8115952 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644272 kB' 'Mapped: 214176 kB' 'Shmem: 7474972 kB' 'KReclaimable: 312604 kB' 'Slab: 1125440 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812836 kB' 'KernelStack: 27184 kB' 'PageTables: 9632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9517716 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234928 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # 
continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 23:48:12 -- setup/common.sh@33 -- # echo 0 00:03:42.124 23:48:12 -- setup/common.sh@33 -- # return 0 00:03:42.124 23:48:12 -- setup/hugepages.sh@99 -- # surp=0 00:03:42.124 23:48:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.124 23:48:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.124 23:48:12 -- setup/common.sh@18 -- # local node= 00:03:42.124 23:48:12 -- setup/common.sh@19 -- # local var val 00:03:42.124 23:48:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.124 23:48:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.124 23:48:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.124 23:48:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.124 23:48:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.124 23:48:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108051484 kB' 'MemAvailable: 111589648 kB' 'Buffers: 4124 kB' 'Cached: 11677056 kB' 'SwapCached: 0 kB' 
'Active: 8799908 kB' 'Inactive: 3515796 kB' 'Active(anon): 8109512 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637808 kB' 'Mapped: 213724 kB' 'Shmem: 7474988 kB' 'KReclaimable: 312604 kB' 'Slab: 1125436 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812832 kB' 'KernelStack: 27184 kB' 'PageTables: 9464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9511612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234908 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.124 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:42.125 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.125 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.387 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.387 23:48:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 
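The key-by-key scans running through this stretch are verify_nr_hugepages pulling the system-wide counters out of /proc/meminfo: AnonHugePages (transparent hugepage usage), HugePages_Surp, HugePages_Rsvd, and, in the records that follow, HugePages_Total, which is then checked against the requested count plus surplus plus reserved pages ('(( 1024 == nr_hugepages + surp + resv ))' below). A condensed restatement of that arithmetic is sketched here; NR_HUGE and the meminfo helper are illustrative names, not the variables actually used by setup/hugepages.sh.

# Illustrative sketch of the verification arithmetic, assuming a request of 1024 pages.
NR_HUGE=1024
meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

total=$(meminfo HugePages_Total)    # 1024 in the run above
surp=$(meminfo HugePages_Surp)      # 0
resv=$(meminfo HugePages_Rsvd)      # 0
anon=$(meminfo AnonHugePages)       # 0 kB here, so THP is not inflating the numbers

if (( total == NR_HUGE + surp + resv )); then
    echo "hugepage accounting is consistent: $total pages allocated"
else
    echo "unexpected hugepage count: total=$total surp=$surp resv=$resv" >&2
fi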
00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.388 23:48:12 -- setup/common.sh@33 -- # echo 0 00:03:42.388 23:48:12 -- setup/common.sh@33 -- # return 0 00:03:42.388 23:48:12 -- setup/hugepages.sh@100 -- # resv=0 00:03:42.388 23:48:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.388 nr_hugepages=1024 00:03:42.388 23:48:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.388 resv_hugepages=0 00:03:42.388 23:48:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.388 surplus_hugepages=0 00:03:42.388 23:48:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.388 anon_hugepages=0 00:03:42.388 23:48:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.388 23:48:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.388 23:48:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.388 23:48:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.388 23:48:12 -- setup/common.sh@18 -- # local node= 00:03:42.388 23:48:12 -- setup/common.sh@19 -- # local var val 00:03:42.388 23:48:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.388 23:48:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.388 23:48:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.388 23:48:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.388 23:48:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.388 23:48:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.388 23:48:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108050412 kB' 'MemAvailable: 111588576 kB' 'Buffers: 4124 kB' 'Cached: 11677068 kB' 'SwapCached: 0 kB' 'Active: 8799780 kB' 'Inactive: 3515796 kB' 'Active(anon): 8109384 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637692 kB' 'Mapped: 213732 kB' 'Shmem: 7475000 kB' 'KReclaimable: 312604 kB' 'Slab: 1125436 kB' 'SReclaimable: 312604 kB' 'SUnreclaim: 812832 kB' 'KernelStack: 27184 kB' 'PageTables: 9700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9511624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234876 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3728756 kB' 'DirectMap2M: 43137024 kB' 'DirectMap1G: 89128960 kB' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- 
setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.388 23:48:12 -- 
setup/common.sh@33 -- # echo 1024 00:03:42.388 23:48:12 -- setup/common.sh@33 -- # return 0 00:03:42.388 23:48:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.388 23:48:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.388 23:48:12 -- setup/hugepages.sh@27 -- # local node 00:03:42.388 23:48:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.388 23:48:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.388 23:48:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.388 23:48:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:42.388 23:48:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.388 23:48:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.388 23:48:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.388 23:48:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.388 23:48:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.388 23:48:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.388 23:48:12 -- setup/common.sh@18 -- # local node=0 00:03:42.388 23:48:12 -- setup/common.sh@19 -- # local var val 00:03:42.388 23:48:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.388 23:48:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.388 23:48:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.388 23:48:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.388 23:48:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.388 23:48:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.388 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.388 23:48:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58879596 kB' 'MemUsed: 6779412 kB' 'SwapCached: 0 kB' 'Active: 2493148 kB' 'Inactive: 106900 kB' 'Active(anon): 2183628 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2508888 kB' 'Mapped: 87656 kB' 'AnonPages: 94320 kB' 'Shmem: 2092468 kB' 'KernelStack: 12392 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 163284 kB' 'Slab: 548716 kB' 'SReclaimable: 163284 kB' 'SUnreclaim: 385432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.388 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 
-- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # continue 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.389 23:48:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.389 23:48:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.389 23:48:12 -- setup/common.sh@33 -- # echo 0 00:03:42.389 23:48:12 -- setup/common.sh@33 -- # return 0 00:03:42.389 23:48:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.389 23:48:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.389 23:48:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.389 23:48:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.389 23:48:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:42.389 node0=1024 expecting 1024 00:03:42.389 23:48:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:42.389 00:03:42.389 real 0m7.871s 00:03:42.389 user 0m3.148s 00:03:42.389 sys 0m4.839s 00:03:42.389 23:48:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:42.389 23:48:12 -- common/autotest_common.sh@10 -- # set +x 00:03:42.389 ************************************ 00:03:42.389 END TEST no_shrink_alloc 00:03:42.389 ************************************ 00:03:42.389 23:48:12 -- setup/hugepages.sh@217 -- # clear_hp 00:03:42.389 23:48:12 -- setup/hugepages.sh@37 -- # local node hp 00:03:42.389 23:48:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.389 
23:48:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.389 23:48:12 -- setup/hugepages.sh@41 -- # echo 0 00:03:42.389 23:48:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.389 23:48:12 -- setup/hugepages.sh@41 -- # echo 0 00:03:42.389 23:48:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.389 23:48:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.389 23:48:12 -- setup/hugepages.sh@41 -- # echo 0 00:03:42.389 23:48:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.389 23:48:12 -- setup/hugepages.sh@41 -- # echo 0 00:03:42.389 23:48:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:42.389 23:48:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:42.389 00:03:42.389 real 0m28.385s 00:03:42.389 user 0m11.099s 00:03:42.389 sys 0m17.463s 00:03:42.389 23:48:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:42.389 23:48:12 -- common/autotest_common.sh@10 -- # set +x 00:03:42.389 ************************************ 00:03:42.389 END TEST hugepages 00:03:42.389 ************************************ 00:03:42.389 23:48:12 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:42.389 23:48:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:42.389 23:48:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:42.389 23:48:12 -- common/autotest_common.sh@10 -- # set +x 00:03:42.650 ************************************ 00:03:42.650 START TEST driver 00:03:42.650 ************************************ 00:03:42.650 23:48:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:42.650 * Looking for test storage... 
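The clear_hp records just above walk every NUMA node's hugepage directories and echo 0 into them so the next test group starts from an empty pool. A hedged sketch of that cleanup, assuming the echo is redirected into each size's nr_hugepages file (the trace only shows the echo itself, so the target is inferred from the sysfs layout):

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # release every page of this size on this node
            done
        done
        export CLEAR_HUGE=yes   # later setup.sh runs can key off this to keep the pool cleared
    }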
00:03:42.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.650 23:48:12 -- setup/driver.sh@68 -- # setup reset 00:03:42.650 23:48:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.650 23:48:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.937 23:48:17 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:47.937 23:48:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.937 23:48:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.937 23:48:17 -- common/autotest_common.sh@10 -- # set +x 00:03:47.937 ************************************ 00:03:47.937 START TEST guess_driver 00:03:47.937 ************************************ 00:03:47.937 23:48:17 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:47.937 23:48:17 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:47.937 23:48:17 -- setup/driver.sh@47 -- # local fail=0 00:03:47.937 23:48:17 -- setup/driver.sh@49 -- # pick_driver 00:03:47.937 23:48:17 -- setup/driver.sh@36 -- # vfio 00:03:47.937 23:48:17 -- setup/driver.sh@21 -- # local iommu_grups 00:03:47.937 23:48:17 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:47.937 23:48:17 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:47.937 23:48:17 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:47.937 23:48:17 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:47.937 23:48:17 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:03:47.937 23:48:17 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:47.937 23:48:17 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:47.937 23:48:17 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:47.937 23:48:17 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:47.937 23:48:17 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:47.937 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:47.937 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:47.937 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:47.937 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:47.937 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:47.937 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:47.938 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:47.938 23:48:17 -- setup/driver.sh@30 -- # return 0 00:03:47.938 23:48:17 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:47.938 23:48:17 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:47.938 23:48:17 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:47.938 23:48:17 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:47.938 Looking for driver=vfio-pci 00:03:47.938 23:48:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.938 23:48:17 -- setup/driver.sh@45 -- # setup output config 00:03:47.938 23:48:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.938 23:48:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.480 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.480 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.480 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.481 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.481 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.481 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.481 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.481 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.481 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.481 23:48:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.481 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.481 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.481 23:48:20 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:50.481 23:48:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.481 23:48:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.741 23:48:20 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:50.741 23:48:20 -- setup/driver.sh@65 -- # setup reset 00:03:50.741 23:48:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.741 23:48:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.942 00:03:54.943 real 0m7.734s 00:03:54.943 user 0m2.194s 00:03:54.943 sys 0m4.532s 00:03:54.943 23:48:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:54.943 23:48:25 -- common/autotest_common.sh@10 -- # set +x 00:03:54.943 ************************************ 00:03:54.943 END TEST guess_driver 00:03:54.943 ************************************ 00:03:55.203 00:03:55.203 real 0m12.526s 00:03:55.203 user 0m3.521s 00:03:55.203 sys 0m7.095s 00:03:55.203 23:48:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.203 23:48:25 -- common/autotest_common.sh@10 -- # set +x 00:03:55.203 ************************************ 00:03:55.203 END TEST driver 00:03:55.203 ************************************ 00:03:55.203 23:48:25 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:55.203 23:48:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.203 23:48:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.203 23:48:25 -- common/autotest_common.sh@10 -- # set +x 00:03:55.203 ************************************ 00:03:55.203 START TEST devices 00:03:55.203 ************************************ 00:03:55.203 23:48:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:55.464 * Looking for test storage... 
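Before the devices suite gets going, it is worth unpacking the guess_driver records above: the helper settles on vfio-pci by checking whether the kernel exposes IOMMU groups (322 of them here), whether vfio's unsafe no-IOMMU mode is available as a fallback, and whether modprobe can actually resolve vfio_pci into loadable modules. A condensed sketch of that decision, following the driver.sh@21-51 records (simplified; the real helper also handles built-in modules and other candidate drivers):

    pick_driver() {
        local unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        local -a iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            # usable only if modprobe resolves the module, i.e. prints an insmod .ko chain
            if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'   # callers compare against this literal, as at driver.sh@51
    }

With vfio-pci selected, the marker loop above then confirmed that setup.sh config reported that driver for every device before the suite reset the setup.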
00:03:55.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.464 23:48:25 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:55.464 23:48:25 -- setup/devices.sh@192 -- # setup reset 00:03:55.464 23:48:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.464 23:48:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.671 23:48:29 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:59.671 23:48:29 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:59.671 23:48:29 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:59.671 23:48:29 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:59.671 23:48:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.671 23:48:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:59.671 23:48:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:59.671 23:48:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.671 23:48:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.671 23:48:29 -- setup/devices.sh@196 -- # blocks=() 00:03:59.671 23:48:29 -- setup/devices.sh@196 -- # declare -a blocks 00:03:59.671 23:48:29 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:59.671 23:48:29 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:59.671 23:48:29 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:59.671 23:48:29 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.671 23:48:29 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:59.671 23:48:29 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:59.671 23:48:29 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:03:59.671 23:48:29 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:59.671 23:48:29 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:59.671 23:48:29 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:59.671 23:48:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:59.671 No valid GPT data, bailing 00:03:59.671 23:48:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.671 23:48:29 -- scripts/common.sh@391 -- # pt= 00:03:59.671 23:48:29 -- scripts/common.sh@392 -- # return 1 00:03:59.671 23:48:29 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:59.671 23:48:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:59.671 23:48:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:59.671 23:48:29 -- setup/common.sh@80 -- # echo 1920383410176 00:03:59.671 23:48:29 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:59.671 23:48:29 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.671 23:48:29 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:03:59.671 23:48:29 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:59.671 23:48:29 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:59.671 23:48:29 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:59.672 23:48:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.672 23:48:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.672 23:48:29 -- common/autotest_common.sh@10 -- # set +x 00:03:59.672 ************************************ 00:03:59.672 START TEST nvme_mount 00:03:59.672 ************************************ 00:03:59.672 23:48:29 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:59.672 23:48:29 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:59.672 23:48:29 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:59.672 23:48:29 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.672 23:48:29 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.672 23:48:29 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:59.672 23:48:29 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:59.672 23:48:29 -- setup/common.sh@40 -- # local part_no=1 00:03:59.672 23:48:29 -- setup/common.sh@41 -- # local size=1073741824 00:03:59.672 23:48:29 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:59.672 23:48:29 -- setup/common.sh@44 -- # parts=() 00:03:59.672 23:48:29 -- setup/common.sh@44 -- # local parts 00:03:59.672 23:48:29 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:59.672 23:48:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.672 23:48:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.672 23:48:29 -- setup/common.sh@46 -- # (( part++ )) 00:03:59.672 23:48:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.672 23:48:29 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:59.672 23:48:29 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:59.672 23:48:29 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:00.615 Creating new GPT entries in memory. 00:04:00.615 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.615 other utilities. 00:04:00.615 23:48:30 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.615 23:48:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.615 23:48:30 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:00.615 23:48:30 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.615 23:48:30 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:02.000 Creating new GPT entries in memory. 00:04:02.000 The operation has completed successfully. 
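The partition_drive step above prepares the test disk with sgdisk: zap any existing label, then create a single 1 GiB partition. The 2048:2099199 sector range follows from the trace's size arithmetic (1073741824 bytes / 512-byte sectors = 2097152 sectors, so the partition ends at 2048 + 2097152 - 1 = 2099199), and sync_dev_uevents.sh is started alongside so the harness can wait for the partition uevent (the wait in the next record) before formatting. A compressed sketch of the mkfs-and-mount sequence that follows, with the mount point shortened for readability (the real one sits under the workspace's spdk/test/setup/nvme_mount):

    disk=/dev/nvme0n1
    mnt=/tmp/nvme_mount   # stand-in for .../spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                            # wipe any existing GPT/MBR
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 1 GiB partition (2097152 sectors)
    # in the harness, scripts/sync_dev_uevents.sh block/partition nvme0n1p1 runs in the
    # background and the subsequent wait blocks until the partition uevent has fired

    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"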
00:04:02.000 23:48:31 -- setup/common.sh@57 -- # (( part++ )) 00:04:02.000 23:48:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.000 23:48:31 -- setup/common.sh@62 -- # wait 162738 00:04:02.000 23:48:31 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.000 23:48:31 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:02.000 23:48:31 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.000 23:48:31 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:02.000 23:48:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:02.000 23:48:31 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.000 23:48:31 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.000 23:48:31 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:02.000 23:48:31 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:02.000 23:48:31 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.000 23:48:31 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.000 23:48:31 -- setup/devices.sh@53 -- # local found=0 00:04:02.001 23:48:31 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.001 23:48:31 -- setup/devices.sh@56 -- # : 00:04:02.001 23:48:31 -- setup/devices.sh@59 -- # local pci status 00:04:02.001 23:48:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.001 23:48:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:02.001 23:48:31 -- setup/devices.sh@47 -- # setup output config 00:04:02.001 23:48:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.001 23:48:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:04.548 23:48:34 -- setup/devices.sh@63 -- # found=1 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.548 23:48:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:04.548 23:48:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.119 23:48:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.119 23:48:35 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:05.119 23:48:35 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.119 23:48:35 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.119 23:48:35 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.119 23:48:35 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:05.119 23:48:35 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.119 23:48:35 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.119 23:48:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.119 23:48:35 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:05.119 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:05.119 23:48:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.119 23:48:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:05.119 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:05.119 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:05.119 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:05.119 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:05.119 23:48:35 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:05.119 23:48:35 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:05.119 23:48:35 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.119 23:48:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:05.119 23:48:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:05.380 23:48:35 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.380 23:48:35 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.380 23:48:35 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:05.380 23:48:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:05.380 23:48:35 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.380 23:48:35 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.380 23:48:35 -- setup/devices.sh@53 -- # local found=0 00:04:05.380 23:48:35 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.380 23:48:35 -- setup/devices.sh@56 -- # : 00:04:05.380 23:48:35 -- setup/devices.sh@59 -- # local pci status 00:04:05.380 23:48:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.380 23:48:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:05.380 23:48:35 -- setup/devices.sh@47 -- # setup output config 00:04:05.380 23:48:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.380 23:48:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:08.751 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:08.752 23:48:38 -- setup/devices.sh@63 -- # found=1 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.752 23:48:38 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:08.752 23:48:38 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.752 23:48:38 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.752 23:48:38 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:08.752 23:48:38 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.752 23:48:38 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:08.752 23:48:38 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:08.752 23:48:38 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:08.752 23:48:38 -- setup/devices.sh@50 -- # local mount_point= 00:04:08.752 23:48:38 -- setup/devices.sh@51 -- # local test_file= 00:04:08.752 23:48:38 -- setup/devices.sh@53 -- # local found=0 00:04:08.752 23:48:38 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:08.752 23:48:38 -- setup/devices.sh@59 -- # local pci status 00:04:08.752 23:48:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.752 23:48:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:08.752 23:48:38 -- setup/devices.sh@47 -- # setup output config 00:04:08.752 23:48:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.752 23:48:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.054 23:48:42 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:12.054 23:48:42 -- setup/devices.sh@63 -- # found=1 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.054 23:48:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.054 23:48:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.625 23:48:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.625 23:48:42 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:12.625 23:48:42 -- setup/devices.sh@68 -- # return 0 00:04:12.625 23:48:42 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:12.625 23:48:42 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.625 23:48:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:12.625 23:48:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.625 23:48:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.625 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.625 00:04:12.625 real 0m12.794s 00:04:12.625 user 0m3.903s 00:04:12.625 sys 0m6.669s 00:04:12.625 23:48:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.625 23:48:42 -- common/autotest_common.sh@10 -- # set +x 00:04:12.625 ************************************ 00:04:12.625 END TEST nvme_mount 00:04:12.625 ************************************ 00:04:12.625 23:48:42 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:12.625 23:48:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.625 23:48:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.625 23:48:42 -- common/autotest_common.sh@10 -- # set +x 00:04:12.625 ************************************ 00:04:12.625 START TEST dm_mount 00:04:12.625 ************************************ 00:04:12.625 23:48:42 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:12.625 23:48:42 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:12.625 23:48:42 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:12.625 23:48:42 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:12.625 23:48:42 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:12.625 23:48:42 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:12.625 23:48:42 -- setup/common.sh@40 -- # local part_no=2 00:04:12.625 23:48:42 -- setup/common.sh@41 -- # local size=1073741824 00:04:12.625 23:48:42 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:12.625 23:48:42 -- setup/common.sh@44 -- # parts=() 00:04:12.625 23:48:42 -- setup/common.sh@44 -- # local parts 00:04:12.625 23:48:42 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:12.625 23:48:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.625 23:48:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:12.625 23:48:42 -- setup/common.sh@46 -- # (( part++ )) 00:04:12.625 23:48:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.625 23:48:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:12.625 23:48:42 -- setup/common.sh@46 -- # (( part++ )) 00:04:12.625 23:48:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.625 23:48:42 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:12.625 23:48:42 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:12.625 23:48:42 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:13.566 Creating new GPT entries in memory. 00:04:13.566 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:13.566 other utilities. 00:04:13.827 23:48:43 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:13.827 23:48:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.827 23:48:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:13.827 23:48:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.827 23:48:43 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:14.768 Creating new GPT entries in memory. 00:04:14.768 The operation has completed successfully. 
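The sgdisk --new calls in this dm_mount setup (1:2048:2099199 above, and 2:2099200:4196351 for the second partition that follows) come straight from the partition_drive helper's sector arithmetic: the 1073741824-byte size is divided by the 512-byte sector size to get 2097152 sectors, the first partition starts at sector 2048, and each later partition starts one sector past the previous end. A condensed sketch of that loop, assuming the same two-partition, 1 GiB-per-partition layout used here (not the setup/common.sh code verbatim):

  disk=/dev/nvme0n1
  size=$(( 1073741824 / 512 ))                  # 2097152 sectors = 1 GiB per partition
  part_start=0 part_end=0
  sgdisk "$disk" --zap-all                      # wipe any existing GPT/MBR first
  for part in 1 2; do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      # yields --new=1:2048:2099199 and --new=2:2099200:4196351, matching the log
      flock "$disk" sgdisk "$disk" --new=${part}:${part_start}:${part_end}
  done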
00:04:14.768 23:48:44 -- setup/common.sh@57 -- # (( part++ )) 00:04:14.768 23:48:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.768 23:48:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.768 23:48:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.768 23:48:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:15.743 The operation has completed successfully. 00:04:15.743 23:48:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:15.743 23:48:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.743 23:48:45 -- setup/common.sh@62 -- # wait 167835 00:04:15.743 23:48:45 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:15.743 23:48:45 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.743 23:48:45 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.743 23:48:45 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:15.743 23:48:45 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:15.743 23:48:45 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:15.743 23:48:45 -- setup/devices.sh@161 -- # break 00:04:15.743 23:48:45 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:15.743 23:48:45 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:15.743 23:48:45 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:15.743 23:48:45 -- setup/devices.sh@166 -- # dm=dm-1 00:04:15.743 23:48:45 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:15.743 23:48:45 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:15.743 23:48:45 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.743 23:48:45 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:15.743 23:48:45 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.743 23:48:45 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:15.743 23:48:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:15.743 23:48:45 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.743 23:48:45 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.743 23:48:45 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:15.743 23:48:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:15.743 23:48:45 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.743 23:48:45 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.743 23:48:45 -- setup/devices.sh@53 -- # local found=0 00:04:15.743 23:48:45 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:15.743 23:48:45 -- setup/devices.sh@56 -- # : 00:04:15.743 23:48:45 -- 
setup/devices.sh@59 -- # local pci status 00:04:15.743 23:48:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.743 23:48:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:15.743 23:48:45 -- setup/devices.sh@47 -- # setup output config 00:04:15.743 23:48:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.743 23:48:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:19.043 23:48:49 -- setup/devices.sh@63 -- # found=1 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.043 23:48:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.043 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.615 23:48:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.615 23:48:49 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:19.615 23:48:49 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.615 23:48:49 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.615 23:48:49 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.615 23:48:49 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.615 23:48:49 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:19.615 23:48:49 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:19.615 23:48:49 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:19.615 23:48:49 -- setup/devices.sh@50 -- # local mount_point= 00:04:19.615 23:48:49 -- setup/devices.sh@51 -- # local test_file= 00:04:19.615 23:48:49 -- setup/devices.sh@53 -- # local found=0 00:04:19.615 23:48:49 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.615 23:48:49 -- setup/devices.sh@59 -- # local pci status 00:04:19.615 23:48:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.615 23:48:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:19.615 23:48:49 -- setup/devices.sh@47 -- # setup output config 00:04:19.615 23:48:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.615 23:48:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:22.914 23:48:52 -- setup/devices.sh@63 -- # found=1 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.914 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.915 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.915 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.915 23:48:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.915 23:48:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.915 23:48:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.915 23:48:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:22.915 23:48:52 -- setup/devices.sh@68 -- # return 0 00:04:22.915 23:48:52 -- setup/devices.sh@187 -- # cleanup_dm 00:04:22.915 23:48:52 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.915 23:48:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.915 23:48:52 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:22.915 23:48:53 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.915 23:48:53 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:22.915 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.915 23:48:53 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.915 23:48:53 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:22.915 00:04:22.915 real 0m10.245s 00:04:22.915 user 0m2.643s 00:04:22.915 sys 0m4.569s 00:04:22.915 23:48:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.915 23:48:53 -- common/autotest_common.sh@10 -- # set +x 00:04:22.915 ************************************ 00:04:22.915 END TEST dm_mount 00:04:22.915 ************************************ 00:04:22.915 23:48:53 -- setup/devices.sh@1 -- # cleanup 00:04:22.915 23:48:53 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:22.915 23:48:53 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.915 23:48:53 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.915 23:48:53 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:22.915 23:48:53 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.915 23:48:53 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.175 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 
00:04:23.175 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:23.175 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:23.175 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:23.175 23:48:53 -- setup/devices.sh@12 -- # cleanup_dm 00:04:23.175 23:48:53 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.175 23:48:53 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:23.175 23:48:53 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.175 23:48:53 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:23.175 23:48:53 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.175 23:48:53 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:23.175 00:04:23.175 real 0m27.983s 00:04:23.175 user 0m8.363s 00:04:23.175 sys 0m14.195s 00:04:23.175 23:48:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.175 23:48:53 -- common/autotest_common.sh@10 -- # set +x 00:04:23.175 ************************************ 00:04:23.175 END TEST devices 00:04:23.175 ************************************ 00:04:23.175 00:04:23.175 real 1m34.471s 00:04:23.175 user 0m31.106s 00:04:23.175 sys 0m53.575s 00:04:23.175 23:48:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.175 23:48:53 -- common/autotest_common.sh@10 -- # set +x 00:04:23.175 ************************************ 00:04:23.175 END TEST setup.sh 00:04:23.175 ************************************ 00:04:23.436 23:48:53 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:26.734 Hugepages 00:04:26.734 node hugesize free / total 00:04:26.734 node0 1048576kB 0 / 0 00:04:26.734 node0 2048kB 2048 / 2048 00:04:26.734 node1 1048576kB 0 / 0 00:04:26.734 node1 2048kB 0 / 0 00:04:26.734 00:04:26.734 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:26.734 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:26.734 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:26.734 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:26.734 23:48:56 -- spdk/autotest.sh@130 -- # uname -s 00:04:26.734 23:48:56 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:26.734 23:48:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:26.734 23:48:56 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.037 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:80:01.2 (8086 0b00): 
ioatdma -> vfio-pci 00:04:30.037 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:30.037 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:31.949 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:32.209 23:49:02 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:33.153 23:49:03 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:33.153 23:49:03 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:33.153 23:49:03 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:33.153 23:49:03 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:33.153 23:49:03 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:33.153 23:49:03 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:33.153 23:49:03 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.153 23:49:03 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:33.153 23:49:03 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:33.153 23:49:03 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:33.153 23:49:03 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:33.153 23:49:03 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.461 Waiting for block devices as requested 00:04:36.461 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:36.461 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:36.722 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:36.722 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:36.722 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:36.984 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:36.984 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:36.984 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:36.984 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:37.245 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:37.245 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:37.506 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:37.506 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:37.506 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:37.506 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:37.767 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:37.767 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:38.028 23:49:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:38.028 23:49:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:38.028 23:49:08 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:38.028 23:49:08 -- common/autotest_common.sh@1488 -- # grep 0000:65:00.0/nvme/nvme 00:04:38.028 23:49:08 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:38.028 23:49:08 -- common/autotest_common.sh@1489 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:38.028 23:49:08 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:38.028 23:49:08 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:38.028 23:49:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:38.028 23:49:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:38.028 23:49:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:38.028 23:49:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:38.028 23:49:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:38.028 23:49:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:38.028 23:49:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:38.028 23:49:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:38.028 23:49:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:38.028 23:49:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:38.028 23:49:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:38.028 23:49:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:38.028 23:49:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:38.028 23:49:08 -- common/autotest_common.sh@1543 -- # continue 00:04:38.028 23:49:08 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:38.028 23:49:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:38.028 23:49:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.028 23:49:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:38.028 23:49:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:38.028 23:49:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.028 23:49:08 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.334 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.334 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:41.334 23:49:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:41.334 23:49:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:41.334 23:49:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.334 23:49:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:41.334 23:49:11 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:41.334 23:49:11 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.334 23:49:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:41.334 23:49:11 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:41.334 23:49:11 -- common/autotest_common.sh@1565 -- # 
get_nvme_bdfs 00:04:41.334 23:49:11 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:41.334 23:49:11 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:41.334 23:49:11 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.334 23:49:11 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.334 23:49:11 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:41.334 23:49:11 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:41.334 23:49:11 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:41.334 23:49:11 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:41.334 23:49:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:41.334 23:49:11 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:41.334 23:49:11 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:41.334 23:49:11 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:04:41.334 23:49:11 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:04:41.334 23:49:11 -- common/autotest_common.sh@1579 -- # return 0 00:04:41.334 23:49:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:41.334 23:49:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:41.334 23:49:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:41.334 23:49:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:41.334 23:49:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:41.334 23:49:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:41.334 23:49:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.334 23:49:11 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.334 23:49:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.334 23:49:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.334 23:49:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.596 ************************************ 00:04:41.596 START TEST env 00:04:41.596 ************************************ 00:04:41.596 23:49:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:41.596 * Looking for test storage... 
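The opal_revert_cleanup step above ends up doing nothing on this node: get_nvme_bdfs_by_id filters the BDF list produced by gen_nvme.sh against PCI device ID 0x0a54, and the controller at 0000:65:00.0 reports 0xa80a, so the filtered list stays empty and the function returns 0. A condensed sketch of that filter, assuming the same jq path shown in the log:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  for bdf in "${bdfs[@]}"; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")    # reads 0xa80a for this controller
      [[ $device == 0x0a54 ]] && echo "$bdf"              # keep only controllers with device ID 0x0a54
  done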
00:04:41.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:41.596 23:49:11 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.596 23:49:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.596 23:49:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.596 23:49:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.859 ************************************ 00:04:41.859 START TEST env_memory 00:04:41.859 ************************************ 00:04:41.859 23:49:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:41.859 00:04:41.859 00:04:41.859 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.859 http://cunit.sourceforge.net/ 00:04:41.859 00:04:41.859 00:04:41.859 Suite: memory 00:04:41.859 Test: alloc and free memory map ...[2024-04-26 23:49:11.933991] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:41.859 passed 00:04:41.859 Test: mem map translation ...[2024-04-26 23:49:11.959543] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:41.859 [2024-04-26 23:49:11.959572] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:41.859 [2024-04-26 23:49:11.959620] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:41.859 [2024-04-26 23:49:11.959628] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:41.859 passed 00:04:41.859 Test: mem map registration ...[2024-04-26 23:49:12.014817] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:41.859 [2024-04-26 23:49:12.014859] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:41.859 passed 00:04:42.122 Test: mem map adjacent registrations ...passed 00:04:42.122 00:04:42.122 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.122 suites 1 1 n/a 0 0 00:04:42.122 tests 4 4 4 0 0 00:04:42.122 asserts 152 152 152 0 n/a 00:04:42.122 00:04:42.122 Elapsed time = 0.192 seconds 00:04:42.122 00:04:42.122 real 0m0.206s 00:04:42.122 user 0m0.193s 00:04:42.122 sys 0m0.012s 00:04:42.122 23:49:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.122 23:49:12 -- common/autotest_common.sh@10 -- # set +x 00:04:42.122 ************************************ 00:04:42.122 END TEST env_memory 00:04:42.122 ************************************ 00:04:42.122 23:49:12 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.122 23:49:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.122 23:49:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.122 23:49:12 -- common/autotest_common.sh@10 -- # set +x 
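Every "START TEST … / END TEST …" banner in this log, and the recurring "'[' 2 -le 1 ']'" check that precedes each one, comes from the run_test helper in common/autotest_common.sh: it checks that both a test name and a command were passed, prints the opening banner, times the command, and prints the closing banner. A rough sketch of the pattern, with simplified banners and without the xtrace and timing bookkeeping the real helper also does:

  run_test() {
      [ "$#" -le 1 ] && return 1                 # the '[' 2 -le 1 ']' guard seen before each test
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                                  # run the test command itself
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }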
00:04:42.122 ************************************ 00:04:42.122 START TEST env_vtophys 00:04:42.122 ************************************ 00:04:42.122 23:49:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:42.122 EAL: lib.eal log level changed from notice to debug 00:04:42.122 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.122 EAL: Detected lcore 1 as core 1 on socket 0 00:04:42.122 EAL: Detected lcore 2 as core 2 on socket 0 00:04:42.122 EAL: Detected lcore 3 as core 3 on socket 0 00:04:42.122 EAL: Detected lcore 4 as core 4 on socket 0 00:04:42.122 EAL: Detected lcore 5 as core 5 on socket 0 00:04:42.122 EAL: Detected lcore 6 as core 6 on socket 0 00:04:42.122 EAL: Detected lcore 7 as core 7 on socket 0 00:04:42.122 EAL: Detected lcore 8 as core 8 on socket 0 00:04:42.122 EAL: Detected lcore 9 as core 9 on socket 0 00:04:42.122 EAL: Detected lcore 10 as core 10 on socket 0 00:04:42.122 EAL: Detected lcore 11 as core 11 on socket 0 00:04:42.122 EAL: Detected lcore 12 as core 12 on socket 0 00:04:42.122 EAL: Detected lcore 13 as core 13 on socket 0 00:04:42.122 EAL: Detected lcore 14 as core 14 on socket 0 00:04:42.122 EAL: Detected lcore 15 as core 15 on socket 0 00:04:42.122 EAL: Detected lcore 16 as core 16 on socket 0 00:04:42.122 EAL: Detected lcore 17 as core 17 on socket 0 00:04:42.122 EAL: Detected lcore 18 as core 18 on socket 0 00:04:42.122 EAL: Detected lcore 19 as core 19 on socket 0 00:04:42.122 EAL: Detected lcore 20 as core 20 on socket 0 00:04:42.122 EAL: Detected lcore 21 as core 21 on socket 0 00:04:42.122 EAL: Detected lcore 22 as core 22 on socket 0 00:04:42.122 EAL: Detected lcore 23 as core 23 on socket 0 00:04:42.122 EAL: Detected lcore 24 as core 24 on socket 0 00:04:42.122 EAL: Detected lcore 25 as core 25 on socket 0 00:04:42.122 EAL: Detected lcore 26 as core 26 on socket 0 00:04:42.122 EAL: Detected lcore 27 as core 27 on socket 0 00:04:42.122 EAL: Detected lcore 28 as core 28 on socket 0 00:04:42.122 EAL: Detected lcore 29 as core 29 on socket 0 00:04:42.122 EAL: Detected lcore 30 as core 30 on socket 0 00:04:42.122 EAL: Detected lcore 31 as core 31 on socket 0 00:04:42.122 EAL: Detected lcore 32 as core 32 on socket 0 00:04:42.122 EAL: Detected lcore 33 as core 33 on socket 0 00:04:42.122 EAL: Detected lcore 34 as core 34 on socket 0 00:04:42.122 EAL: Detected lcore 35 as core 35 on socket 0 00:04:42.122 EAL: Detected lcore 36 as core 0 on socket 1 00:04:42.122 EAL: Detected lcore 37 as core 1 on socket 1 00:04:42.122 EAL: Detected lcore 38 as core 2 on socket 1 00:04:42.122 EAL: Detected lcore 39 as core 3 on socket 1 00:04:42.122 EAL: Detected lcore 40 as core 4 on socket 1 00:04:42.122 EAL: Detected lcore 41 as core 5 on socket 1 00:04:42.122 EAL: Detected lcore 42 as core 6 on socket 1 00:04:42.122 EAL: Detected lcore 43 as core 7 on socket 1 00:04:42.122 EAL: Detected lcore 44 as core 8 on socket 1 00:04:42.122 EAL: Detected lcore 45 as core 9 on socket 1 00:04:42.122 EAL: Detected lcore 46 as core 10 on socket 1 00:04:42.122 EAL: Detected lcore 47 as core 11 on socket 1 00:04:42.122 EAL: Detected lcore 48 as core 12 on socket 1 00:04:42.122 EAL: Detected lcore 49 as core 13 on socket 1 00:04:42.122 EAL: Detected lcore 50 as core 14 on socket 1 00:04:42.122 EAL: Detected lcore 51 as core 15 on socket 1 00:04:42.122 EAL: Detected lcore 52 as core 16 on socket 1 00:04:42.122 EAL: Detected lcore 53 as core 17 on socket 1 00:04:42.122 EAL: Detected lcore 54 as core 18 on socket 1 
00:04:42.122 EAL: Detected lcore 55 as core 19 on socket 1 00:04:42.122 EAL: Detected lcore 56 as core 20 on socket 1 00:04:42.122 EAL: Detected lcore 57 as core 21 on socket 1 00:04:42.122 EAL: Detected lcore 58 as core 22 on socket 1 00:04:42.122 EAL: Detected lcore 59 as core 23 on socket 1 00:04:42.122 EAL: Detected lcore 60 as core 24 on socket 1 00:04:42.122 EAL: Detected lcore 61 as core 25 on socket 1 00:04:42.122 EAL: Detected lcore 62 as core 26 on socket 1 00:04:42.122 EAL: Detected lcore 63 as core 27 on socket 1 00:04:42.122 EAL: Detected lcore 64 as core 28 on socket 1 00:04:42.122 EAL: Detected lcore 65 as core 29 on socket 1 00:04:42.122 EAL: Detected lcore 66 as core 30 on socket 1 00:04:42.122 EAL: Detected lcore 67 as core 31 on socket 1 00:04:42.122 EAL: Detected lcore 68 as core 32 on socket 1 00:04:42.122 EAL: Detected lcore 69 as core 33 on socket 1 00:04:42.122 EAL: Detected lcore 70 as core 34 on socket 1 00:04:42.122 EAL: Detected lcore 71 as core 35 on socket 1 00:04:42.122 EAL: Detected lcore 72 as core 0 on socket 0 00:04:42.122 EAL: Detected lcore 73 as core 1 on socket 0 00:04:42.122 EAL: Detected lcore 74 as core 2 on socket 0 00:04:42.122 EAL: Detected lcore 75 as core 3 on socket 0 00:04:42.122 EAL: Detected lcore 76 as core 4 on socket 0 00:04:42.122 EAL: Detected lcore 77 as core 5 on socket 0 00:04:42.122 EAL: Detected lcore 78 as core 6 on socket 0 00:04:42.122 EAL: Detected lcore 79 as core 7 on socket 0 00:04:42.122 EAL: Detected lcore 80 as core 8 on socket 0 00:04:42.122 EAL: Detected lcore 81 as core 9 on socket 0 00:04:42.122 EAL: Detected lcore 82 as core 10 on socket 0 00:04:42.122 EAL: Detected lcore 83 as core 11 on socket 0 00:04:42.122 EAL: Detected lcore 84 as core 12 on socket 0 00:04:42.122 EAL: Detected lcore 85 as core 13 on socket 0 00:04:42.122 EAL: Detected lcore 86 as core 14 on socket 0 00:04:42.122 EAL: Detected lcore 87 as core 15 on socket 0 00:04:42.122 EAL: Detected lcore 88 as core 16 on socket 0 00:04:42.122 EAL: Detected lcore 89 as core 17 on socket 0 00:04:42.122 EAL: Detected lcore 90 as core 18 on socket 0 00:04:42.122 EAL: Detected lcore 91 as core 19 on socket 0 00:04:42.122 EAL: Detected lcore 92 as core 20 on socket 0 00:04:42.122 EAL: Detected lcore 93 as core 21 on socket 0 00:04:42.122 EAL: Detected lcore 94 as core 22 on socket 0 00:04:42.122 EAL: Detected lcore 95 as core 23 on socket 0 00:04:42.122 EAL: Detected lcore 96 as core 24 on socket 0 00:04:42.122 EAL: Detected lcore 97 as core 25 on socket 0 00:04:42.122 EAL: Detected lcore 98 as core 26 on socket 0 00:04:42.122 EAL: Detected lcore 99 as core 27 on socket 0 00:04:42.122 EAL: Detected lcore 100 as core 28 on socket 0 00:04:42.122 EAL: Detected lcore 101 as core 29 on socket 0 00:04:42.122 EAL: Detected lcore 102 as core 30 on socket 0 00:04:42.122 EAL: Detected lcore 103 as core 31 on socket 0 00:04:42.122 EAL: Detected lcore 104 as core 32 on socket 0 00:04:42.122 EAL: Detected lcore 105 as core 33 on socket 0 00:04:42.122 EAL: Detected lcore 106 as core 34 on socket 0 00:04:42.122 EAL: Detected lcore 107 as core 35 on socket 0 00:04:42.122 EAL: Detected lcore 108 as core 0 on socket 1 00:04:42.122 EAL: Detected lcore 109 as core 1 on socket 1 00:04:42.122 EAL: Detected lcore 110 as core 2 on socket 1 00:04:42.122 EAL: Detected lcore 111 as core 3 on socket 1 00:04:42.122 EAL: Detected lcore 112 as core 4 on socket 1 00:04:42.122 EAL: Detected lcore 113 as core 5 on socket 1 00:04:42.122 EAL: Detected lcore 114 as core 6 on socket 1 00:04:42.122 
EAL: Detected lcore 115 as core 7 on socket 1 00:04:42.122 EAL: Detected lcore 116 as core 8 on socket 1 00:04:42.122 EAL: Detected lcore 117 as core 9 on socket 1 00:04:42.122 EAL: Detected lcore 118 as core 10 on socket 1 00:04:42.122 EAL: Detected lcore 119 as core 11 on socket 1 00:04:42.123 EAL: Detected lcore 120 as core 12 on socket 1 00:04:42.123 EAL: Detected lcore 121 as core 13 on socket 1 00:04:42.123 EAL: Detected lcore 122 as core 14 on socket 1 00:04:42.123 EAL: Detected lcore 123 as core 15 on socket 1 00:04:42.123 EAL: Detected lcore 124 as core 16 on socket 1 00:04:42.123 EAL: Detected lcore 125 as core 17 on socket 1 00:04:42.123 EAL: Detected lcore 126 as core 18 on socket 1 00:04:42.123 EAL: Detected lcore 127 as core 19 on socket 1 00:04:42.123 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:42.123 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:42.123 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:42.123 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:42.123 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:42.123 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:42.123 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:42.123 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:42.123 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:42.123 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:42.123 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:42.123 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:42.123 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:42.123 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:42.123 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:42.123 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:42.123 EAL: Maximum logical cores by configuration: 128 00:04:42.123 EAL: Detected CPU lcores: 128 00:04:42.123 EAL: Detected NUMA nodes: 2 00:04:42.123 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:42.123 EAL: Detected shared linkage of DPDK 00:04:42.123 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.123 EAL: Bus pci wants IOVA as 'DC' 00:04:42.123 EAL: Buses did not request a specific IOVA mode. 00:04:42.123 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:42.123 EAL: Selected IOVA mode 'VA' 00:04:42.123 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.123 EAL: Probing VFIO support... 00:04:42.123 EAL: IOMMU type 1 (Type 1) is supported 00:04:42.123 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:42.123 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:42.123 EAL: VFIO support initialized 00:04:42.123 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.123 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.123 EAL: Setting up physically contiguous memory... 
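The "No free 2048 kB hugepages reported on node 1" message above matches the hugepage table printed by setup.sh status earlier in this run: node0 carries 2048 pages of 2048 kB and node1 carries none, so EAL can only back allocations with hugepages on socket 0. The per-node pools can be confirmed from the standard kernel sysfs paths, e.g. with a sketch like:

  for node in /sys/devices/system/node/node[0-9]*; do
      pool=$node/hugepages/hugepages-2048kB
      echo "$(basename "$node"): $(cat "$pool/free_hugepages")/$(cat "$pool/nr_hugepages") free/total 2048kB pages"
  done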
00:04:42.123 EAL: Setting maximum number of open files to 524288 00:04:42.123 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.123 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:42.123 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.123 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.123 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.123 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.123 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.123 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.123 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.123 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.123 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.123 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.123 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.123 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.123 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.123 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.123 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.123 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.123 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.123 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:42.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.123 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:42.123 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.123 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:42.123 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:42.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.123 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:42.123 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.123 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:42.123 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:42.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.123 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:42.123 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.123 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:42.123 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:42.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.123 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:42.123 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:42.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.123 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:42.123 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:42.123 EAL: Hugepages will be freed exactly as allocated. 00:04:42.123 EAL: No shared files mode enabled, IPC is disabled 00:04:42.123 EAL: No shared files mode enabled, IPC is disabled 00:04:42.123 EAL: TSC frequency is ~2400000 KHz 00:04:42.123 EAL: Main lcore 0 is ready (tid=7f1d3952ea00;cpuset=[0]) 00:04:42.123 EAL: Trying to obtain current memory policy. 00:04:42.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.123 EAL: Restoring previous memory policy: 0 00:04:42.123 EAL: request: mp_malloc_sync 00:04:42.123 EAL: No shared files mode enabled, IPC is disabled 00:04:42.123 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.123 EAL: No shared files mode enabled, IPC is disabled 00:04:42.385 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.385 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.385 00:04:42.385 00:04:42.385 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.385 http://cunit.sourceforge.net/ 00:04:42.385 00:04:42.385 00:04:42.385 Suite: components_suite 00:04:42.385 Test: vtophys_malloc_test ...passed 00:04:42.385 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.385 EAL: Restoring previous memory policy: 4 00:04:42.385 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.385 EAL: request: mp_malloc_sync 00:04:42.385 EAL: No shared files mode enabled, IPC is disabled 00:04:42.385 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.385 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.385 EAL: request: mp_malloc_sync 00:04:42.385 EAL: No shared files mode enabled, IPC is disabled 00:04:42.385 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.385 EAL: Trying to obtain current memory policy. 00:04:42.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.385 EAL: Restoring previous memory policy: 4 00:04:42.385 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.385 EAL: request: mp_malloc_sync 00:04:42.385 EAL: No shared files mode enabled, IPC is disabled 00:04:42.385 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.385 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.385 EAL: request: mp_malloc_sync 00:04:42.385 EAL: No shared files mode enabled, IPC is disabled 00:04:42.385 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.385 EAL: Trying to obtain current memory policy. 00:04:42.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.385 EAL: Restoring previous memory policy: 4 00:04:42.385 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.385 EAL: request: mp_malloc_sync 00:04:42.385 EAL: No shared files mode enabled, IPC is disabled 00:04:42.385 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.385 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.385 EAL: request: mp_malloc_sync 00:04:42.385 EAL: No shared files mode enabled, IPC is disabled 00:04:42.385 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.386 EAL: Trying to obtain current memory policy. 
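Each "Ask a virtual area of 0x400000000 bytes" reservation above follows from the segment-list parameters EAL just printed: 8192 segments per list times a 2097152-byte (2 MiB) hugepage is 16 GiB of virtual address space per memseg list, and four such lists are created per NUMA node. The arithmetic, checked in shell:

  printf '0x%x\n' $(( 8192 * 2097152 ))         # 0x400000000, i.e. 16 GiB per memseg list
  printf '0x%x\n' $(( 4 * 8192 * 2097152 ))     # 0x1000000000, i.e. 64 GiB of VA reserved per socket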
00:04:42.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.386 EAL: Restoring previous memory policy: 4 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.386 EAL: Trying to obtain current memory policy. 00:04:42.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.386 EAL: Restoring previous memory policy: 4 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.386 EAL: Trying to obtain current memory policy. 00:04:42.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.386 EAL: Restoring previous memory policy: 4 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.386 EAL: Trying to obtain current memory policy. 00:04:42.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.386 EAL: Restoring previous memory policy: 4 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.386 EAL: Trying to obtain current memory policy. 00:04:42.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.386 EAL: Restoring previous memory policy: 4 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.386 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.386 EAL: request: mp_malloc_sync 00:04:42.386 EAL: No shared files mode enabled, IPC is disabled 00:04:42.386 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.386 EAL: Trying to obtain current memory policy. 
00:04:42.386 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.701 EAL: Restoring previous memory policy: 4 00:04:42.701 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.701 EAL: request: mp_malloc_sync 00:04:42.701 EAL: No shared files mode enabled, IPC is disabled 00:04:42.701 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.701 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.701 EAL: request: mp_malloc_sync 00:04:42.701 EAL: No shared files mode enabled, IPC is disabled 00:04:42.701 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.701 EAL: Trying to obtain current memory policy. 00:04:42.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.701 EAL: Restoring previous memory policy: 4 00:04:42.701 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.701 EAL: request: mp_malloc_sync 00:04:42.701 EAL: No shared files mode enabled, IPC is disabled 00:04:42.701 EAL: Heap on socket 0 was expanded by 1026MB 00:04:43.008 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.008 EAL: request: mp_malloc_sync 00:04:43.008 EAL: No shared files mode enabled, IPC is disabled 00:04:43.008 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:43.008 passed 00:04:43.008 00:04:43.008 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.008 suites 1 1 n/a 0 0 00:04:43.008 tests 2 2 2 0 0 00:04:43.008 asserts 497 497 497 0 n/a 00:04:43.008 00:04:43.008 Elapsed time = 0.659 seconds 00:04:43.008 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.008 EAL: request: mp_malloc_sync 00:04:43.008 EAL: No shared files mode enabled, IPC is disabled 00:04:43.008 EAL: Heap on socket 0 was shrunk by 2MB 00:04:43.008 EAL: No shared files mode enabled, IPC is disabled 00:04:43.008 EAL: No shared files mode enabled, IPC is disabled 00:04:43.008 EAL: No shared files mode enabled, IPC is disabled 00:04:43.008 00:04:43.008 real 0m0.792s 00:04:43.008 user 0m0.418s 00:04:43.008 sys 0m0.336s 00:04:43.008 23:49:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.008 23:49:13 -- common/autotest_common.sh@10 -- # set +x 00:04:43.008 ************************************ 00:04:43.008 END TEST env_vtophys 00:04:43.008 ************************************ 00:04:43.008 23:49:13 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:43.008 23:49:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.008 23:49:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.008 23:49:13 -- common/autotest_common.sh@10 -- # set +x 00:04:43.269 ************************************ 00:04:43.269 START TEST env_pci 00:04:43.269 ************************************ 00:04:43.269 23:49:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:43.269 00:04:43.269 00:04:43.269 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.269 http://cunit.sourceforge.net/ 00:04:43.269 00:04:43.269 00:04:43.269 Suite: pci 00:04:43.269 Test: pci_hook ...[2024-04-26 23:49:13.280142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 179221 has claimed it 00:04:43.269 EAL: Cannot find device (10000:00:01.0) 00:04:43.269 EAL: Failed to attach device on primary process 00:04:43.269 passed 00:04:43.269 00:04:43.269 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.269 suites 1 1 n/a 0 0 00:04:43.269 tests 1 1 1 0 0 
00:04:43.269 asserts 25 25 25 0 n/a 00:04:43.269 00:04:43.269 Elapsed time = 0.030 seconds 00:04:43.269 00:04:43.269 real 0m0.050s 00:04:43.269 user 0m0.025s 00:04:43.269 sys 0m0.025s 00:04:43.269 23:49:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.269 23:49:13 -- common/autotest_common.sh@10 -- # set +x 00:04:43.269 ************************************ 00:04:43.269 END TEST env_pci 00:04:43.269 ************************************ 00:04:43.269 23:49:13 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:43.269 23:49:13 -- env/env.sh@15 -- # uname 00:04:43.269 23:49:13 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:43.269 23:49:13 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:43.269 23:49:13 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.269 23:49:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:43.269 23:49:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.269 23:49:13 -- common/autotest_common.sh@10 -- # set +x 00:04:43.530 ************************************ 00:04:43.531 START TEST env_dpdk_post_init 00:04:43.531 ************************************ 00:04:43.531 23:49:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.531 EAL: Detected CPU lcores: 128 00:04:43.531 EAL: Detected NUMA nodes: 2 00:04:43.531 EAL: Detected shared linkage of DPDK 00:04:43.531 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.531 EAL: Selected IOVA mode 'VA' 00:04:43.531 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.531 EAL: VFIO support initialized 00:04:43.531 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.531 EAL: Using IOMMU type 1 (Type 1) 00:04:43.531 EAL: Ignore mapping IO port bar(1) 00:04:43.791 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:43.791 EAL: Ignore mapping IO port bar(1) 00:04:44.052 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:44.052 EAL: Ignore mapping IO port bar(1) 00:04:44.313 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:44.313 EAL: Ignore mapping IO port bar(1) 00:04:44.313 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:44.575 EAL: Ignore mapping IO port bar(1) 00:04:44.575 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:44.837 EAL: Ignore mapping IO port bar(1) 00:04:44.837 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:45.097 EAL: Ignore mapping IO port bar(1) 00:04:45.097 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:45.097 EAL: Ignore mapping IO port bar(1) 00:04:45.358 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:45.619 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:45.619 EAL: Ignore mapping IO port bar(1) 00:04:45.879 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:45.879 EAL: Ignore mapping IO port bar(1) 00:04:45.879 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:46.139 EAL: Ignore mapping IO port bar(1) 00:04:46.139 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:04:46.400 EAL: Ignore mapping IO port bar(1) 00:04:46.400 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:46.660 EAL: Ignore mapping IO port bar(1) 00:04:46.660 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:46.660 EAL: Ignore mapping IO port bar(1) 00:04:46.920 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:46.920 EAL: Ignore mapping IO port bar(1) 00:04:47.182 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:47.182 EAL: Ignore mapping IO port bar(1) 00:04:47.443 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:47.443 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:47.443 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:47.443 Starting DPDK initialization... 00:04:47.443 Starting SPDK post initialization... 00:04:47.443 SPDK NVMe probe 00:04:47.443 Attaching to 0000:65:00.0 00:04:47.443 Attached to 0000:65:00.0 00:04:47.443 Cleaning up... 00:04:49.357 00:04:49.357 real 0m5.707s 00:04:49.357 user 0m0.186s 00:04:49.357 sys 0m0.067s 00:04:49.357 23:49:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.357 23:49:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.357 ************************************ 00:04:49.357 END TEST env_dpdk_post_init 00:04:49.357 ************************************ 00:04:49.357 23:49:19 -- env/env.sh@26 -- # uname 00:04:49.357 23:49:19 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:49.357 23:49:19 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.357 23:49:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.357 23:49:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.357 23:49:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.357 ************************************ 00:04:49.357 START TEST env_mem_callbacks 00:04:49.357 ************************************ 00:04:49.357 23:49:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.357 EAL: Detected CPU lcores: 128 00:04:49.357 EAL: Detected NUMA nodes: 2 00:04:49.357 EAL: Detected shared linkage of DPDK 00:04:49.357 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.357 EAL: Selected IOVA mode 'VA' 00:04:49.357 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.357 EAL: VFIO support initialized 00:04:49.357 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.357 00:04:49.357 00:04:49.357 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.357 http://cunit.sourceforge.net/ 00:04:49.357 00:04:49.357 00:04:49.357 Suite: memory 00:04:49.357 Test: test ... 
00:04:49.357 register 0x200000200000 2097152 00:04:49.357 malloc 3145728 00:04:49.357 register 0x200000400000 4194304 00:04:49.357 buf 0x200000500000 len 3145728 PASSED 00:04:49.357 malloc 64 00:04:49.357 buf 0x2000004fff40 len 64 PASSED 00:04:49.357 malloc 4194304 00:04:49.357 register 0x200000800000 6291456 00:04:49.357 buf 0x200000a00000 len 4194304 PASSED 00:04:49.357 free 0x200000500000 3145728 00:04:49.357 free 0x2000004fff40 64 00:04:49.357 unregister 0x200000400000 4194304 PASSED 00:04:49.357 free 0x200000a00000 4194304 00:04:49.357 unregister 0x200000800000 6291456 PASSED 00:04:49.357 malloc 8388608 00:04:49.357 register 0x200000400000 10485760 00:04:49.357 buf 0x200000600000 len 8388608 PASSED 00:04:49.357 free 0x200000600000 8388608 00:04:49.357 unregister 0x200000400000 10485760 PASSED 00:04:49.357 passed 00:04:49.357 00:04:49.357 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.357 suites 1 1 n/a 0 0 00:04:49.357 tests 1 1 1 0 0 00:04:49.357 asserts 15 15 15 0 n/a 00:04:49.357 00:04:49.357 Elapsed time = 0.004 seconds 00:04:49.357 00:04:49.357 real 0m0.057s 00:04:49.357 user 0m0.018s 00:04:49.357 sys 0m0.038s 00:04:49.357 23:49:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.357 23:49:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.357 ************************************ 00:04:49.357 END TEST env_mem_callbacks 00:04:49.357 ************************************ 00:04:49.357 00:04:49.357 real 0m7.883s 00:04:49.357 user 0m1.245s 00:04:49.357 sys 0m1.078s 00:04:49.357 23:49:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.357 23:49:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.357 ************************************ 00:04:49.357 END TEST env 00:04:49.357 ************************************ 00:04:49.357 23:49:19 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.357 23:49:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.357 23:49:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.357 23:49:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.618 ************************************ 00:04:49.618 START TEST rpc 00:04:49.618 ************************************ 00:04:49.618 23:49:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.618 * Looking for test storage... 00:04:49.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.618 23:49:19 -- rpc/rpc.sh@65 -- # spdk_pid=180692 00:04:49.618 23:49:19 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.618 23:49:19 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:49.618 23:49:19 -- rpc/rpc.sh@67 -- # waitforlisten 180692 00:04:49.618 23:49:19 -- common/autotest_common.sh@817 -- # '[' -z 180692 ']' 00:04:49.618 23:49:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.618 23:49:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:49.618 23:49:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
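The rpc suite that begins here starts a dedicated target (build/bin/spdk_tgt -e bdev, as logged above) and drives it over /var/tmp/spdk.sock. The rpc_integrity flow that follows can be reproduced by hand; a rough sketch, assuming the stock scripts/rpc.py client (the log itself goes through the rpc_cmd wrapper) and a target that is already up:

# Manual equivalent of the rpc_integrity sequence below (sketch; assumes
# spdk_tgt was started as ./build/bin/spdk_tgt -e bdev and listens on the
# default /var/tmp/spdk.sock).
./scripts/rpc.py bdev_malloc_create 8 512               # creates Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length             # expect 2 (Malloc0 + Passthru0)
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length             # back to 0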
00:04:49.618 23:49:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:49.618 23:49:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.879 [2024-04-26 23:49:19.848586] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:04:49.879 [2024-04-26 23:49:19.848638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180692 ] 00:04:49.879 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.879 [2024-04-26 23:49:19.910628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.879 [2024-04-26 23:49:19.978759] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.879 [2024-04-26 23:49:19.978796] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 180692' to capture a snapshot of events at runtime. 00:04:49.879 [2024-04-26 23:49:19.978803] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.879 [2024-04-26 23:49:19.978810] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.879 [2024-04-26 23:49:19.978815] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid180692 for offline analysis/debug. 00:04:49.879 [2024-04-26 23:49:19.978844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.448 23:49:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:50.448 23:49:20 -- common/autotest_common.sh@850 -- # return 0 00:04:50.448 23:49:20 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.448 23:49:20 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.448 23:49:20 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.448 23:49:20 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.448 23:49:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.448 23:49:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.448 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.709 ************************************ 00:04:50.709 START TEST rpc_integrity 00:04:50.709 ************************************ 00:04:50.709 23:49:20 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:50.709 23:49:20 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.709 23:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.709 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.709 23:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.709 23:49:20 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.709 23:49:20 -- rpc/rpc.sh@13 -- # jq length 00:04:50.709 23:49:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.709 23:49:20 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.709 23:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:04:50.709 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.709 23:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.709 23:49:20 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.709 23:49:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.709 23:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.709 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.709 23:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.709 23:49:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.709 { 00:04:50.709 "name": "Malloc0", 00:04:50.709 "aliases": [ 00:04:50.709 "8da0376c-1c00-471c-98bb-4894f764c653" 00:04:50.709 ], 00:04:50.709 "product_name": "Malloc disk", 00:04:50.709 "block_size": 512, 00:04:50.709 "num_blocks": 16384, 00:04:50.709 "uuid": "8da0376c-1c00-471c-98bb-4894f764c653", 00:04:50.709 "assigned_rate_limits": { 00:04:50.709 "rw_ios_per_sec": 0, 00:04:50.709 "rw_mbytes_per_sec": 0, 00:04:50.709 "r_mbytes_per_sec": 0, 00:04:50.709 "w_mbytes_per_sec": 0 00:04:50.709 }, 00:04:50.709 "claimed": false, 00:04:50.709 "zoned": false, 00:04:50.709 "supported_io_types": { 00:04:50.709 "read": true, 00:04:50.709 "write": true, 00:04:50.709 "unmap": true, 00:04:50.709 "write_zeroes": true, 00:04:50.709 "flush": true, 00:04:50.709 "reset": true, 00:04:50.709 "compare": false, 00:04:50.709 "compare_and_write": false, 00:04:50.709 "abort": true, 00:04:50.709 "nvme_admin": false, 00:04:50.709 "nvme_io": false 00:04:50.709 }, 00:04:50.709 "memory_domains": [ 00:04:50.709 { 00:04:50.709 "dma_device_id": "system", 00:04:50.709 "dma_device_type": 1 00:04:50.709 }, 00:04:50.709 { 00:04:50.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.709 "dma_device_type": 2 00:04:50.709 } 00:04:50.709 ], 00:04:50.709 "driver_specific": {} 00:04:50.709 } 00:04:50.709 ]' 00:04:50.709 23:49:20 -- rpc/rpc.sh@17 -- # jq length 00:04:50.709 23:49:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.709 23:49:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.709 23:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.709 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.709 [2024-04-26 23:49:20.894428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.709 [2024-04-26 23:49:20.894460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.709 [2024-04-26 23:49:20.894471] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x181ab40 00:04:50.709 [2024-04-26 23:49:20.894478] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.709 [2024-04-26 23:49:20.895814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.709 [2024-04-26 23:49:20.895835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.709 Passthru0 00:04:50.709 23:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.709 23:49:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.709 23:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.709 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.709 23:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.709 23:49:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.709 { 00:04:50.709 "name": "Malloc0", 00:04:50.709 "aliases": [ 00:04:50.709 "8da0376c-1c00-471c-98bb-4894f764c653" 00:04:50.709 ], 00:04:50.709 "product_name": "Malloc disk", 00:04:50.709 "block_size": 512, 
00:04:50.709 "num_blocks": 16384, 00:04:50.709 "uuid": "8da0376c-1c00-471c-98bb-4894f764c653", 00:04:50.709 "assigned_rate_limits": { 00:04:50.709 "rw_ios_per_sec": 0, 00:04:50.709 "rw_mbytes_per_sec": 0, 00:04:50.709 "r_mbytes_per_sec": 0, 00:04:50.709 "w_mbytes_per_sec": 0 00:04:50.709 }, 00:04:50.709 "claimed": true, 00:04:50.709 "claim_type": "exclusive_write", 00:04:50.709 "zoned": false, 00:04:50.709 "supported_io_types": { 00:04:50.709 "read": true, 00:04:50.709 "write": true, 00:04:50.709 "unmap": true, 00:04:50.709 "write_zeroes": true, 00:04:50.709 "flush": true, 00:04:50.709 "reset": true, 00:04:50.709 "compare": false, 00:04:50.709 "compare_and_write": false, 00:04:50.709 "abort": true, 00:04:50.709 "nvme_admin": false, 00:04:50.709 "nvme_io": false 00:04:50.709 }, 00:04:50.709 "memory_domains": [ 00:04:50.709 { 00:04:50.709 "dma_device_id": "system", 00:04:50.709 "dma_device_type": 1 00:04:50.709 }, 00:04:50.709 { 00:04:50.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.709 "dma_device_type": 2 00:04:50.709 } 00:04:50.709 ], 00:04:50.709 "driver_specific": {} 00:04:50.709 }, 00:04:50.709 { 00:04:50.709 "name": "Passthru0", 00:04:50.709 "aliases": [ 00:04:50.709 "9af55cd0-07ca-5039-8f21-3cdbe419ada4" 00:04:50.709 ], 00:04:50.709 "product_name": "passthru", 00:04:50.709 "block_size": 512, 00:04:50.709 "num_blocks": 16384, 00:04:50.709 "uuid": "9af55cd0-07ca-5039-8f21-3cdbe419ada4", 00:04:50.709 "assigned_rate_limits": { 00:04:50.709 "rw_ios_per_sec": 0, 00:04:50.709 "rw_mbytes_per_sec": 0, 00:04:50.709 "r_mbytes_per_sec": 0, 00:04:50.709 "w_mbytes_per_sec": 0 00:04:50.709 }, 00:04:50.709 "claimed": false, 00:04:50.709 "zoned": false, 00:04:50.709 "supported_io_types": { 00:04:50.709 "read": true, 00:04:50.709 "write": true, 00:04:50.709 "unmap": true, 00:04:50.709 "write_zeroes": true, 00:04:50.709 "flush": true, 00:04:50.709 "reset": true, 00:04:50.709 "compare": false, 00:04:50.709 "compare_and_write": false, 00:04:50.709 "abort": true, 00:04:50.709 "nvme_admin": false, 00:04:50.709 "nvme_io": false 00:04:50.709 }, 00:04:50.709 "memory_domains": [ 00:04:50.709 { 00:04:50.709 "dma_device_id": "system", 00:04:50.709 "dma_device_type": 1 00:04:50.709 }, 00:04:50.709 { 00:04:50.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.709 "dma_device_type": 2 00:04:50.709 } 00:04:50.709 ], 00:04:50.709 "driver_specific": { 00:04:50.709 "passthru": { 00:04:50.709 "name": "Passthru0", 00:04:50.709 "base_bdev_name": "Malloc0" 00:04:50.709 } 00:04:50.709 } 00:04:50.709 } 00:04:50.709 ]' 00:04:50.709 23:49:20 -- rpc/rpc.sh@21 -- # jq length 00:04:50.969 23:49:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.969 23:49:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.969 23:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.969 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.969 23:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.969 23:49:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:50.969 23:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.969 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.969 23:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.969 23:49:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.969 23:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.969 23:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.969 23:49:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.969 23:49:20 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.969 23:49:21 -- rpc/rpc.sh@26 -- # jq length 00:04:50.969 23:49:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.969 00:04:50.969 real 0m0.295s 00:04:50.969 user 0m0.191s 00:04:50.969 sys 0m0.038s 00:04:50.969 23:49:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.969 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.970 ************************************ 00:04:50.970 END TEST rpc_integrity 00:04:50.970 ************************************ 00:04:50.970 23:49:21 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:50.970 23:49:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.970 23:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.970 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.229 ************************************ 00:04:51.229 START TEST rpc_plugins 00:04:51.229 ************************************ 00:04:51.229 23:49:21 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:51.229 23:49:21 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:51.229 23:49:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.229 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.229 23:49:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.229 23:49:21 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:51.229 23:49:21 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:51.229 23:49:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.229 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.229 23:49:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.229 23:49:21 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:51.229 { 00:04:51.229 "name": "Malloc1", 00:04:51.229 "aliases": [ 00:04:51.229 "fd5011c8-0222-4a6a-956b-d56372e9e2b1" 00:04:51.229 ], 00:04:51.229 "product_name": "Malloc disk", 00:04:51.229 "block_size": 4096, 00:04:51.229 "num_blocks": 256, 00:04:51.229 "uuid": "fd5011c8-0222-4a6a-956b-d56372e9e2b1", 00:04:51.229 "assigned_rate_limits": { 00:04:51.229 "rw_ios_per_sec": 0, 00:04:51.229 "rw_mbytes_per_sec": 0, 00:04:51.229 "r_mbytes_per_sec": 0, 00:04:51.229 "w_mbytes_per_sec": 0 00:04:51.229 }, 00:04:51.229 "claimed": false, 00:04:51.229 "zoned": false, 00:04:51.229 "supported_io_types": { 00:04:51.229 "read": true, 00:04:51.229 "write": true, 00:04:51.229 "unmap": true, 00:04:51.229 "write_zeroes": true, 00:04:51.229 "flush": true, 00:04:51.229 "reset": true, 00:04:51.229 "compare": false, 00:04:51.229 "compare_and_write": false, 00:04:51.229 "abort": true, 00:04:51.229 "nvme_admin": false, 00:04:51.229 "nvme_io": false 00:04:51.229 }, 00:04:51.229 "memory_domains": [ 00:04:51.229 { 00:04:51.229 "dma_device_id": "system", 00:04:51.229 "dma_device_type": 1 00:04:51.229 }, 00:04:51.229 { 00:04:51.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.230 "dma_device_type": 2 00:04:51.230 } 00:04:51.230 ], 00:04:51.230 "driver_specific": {} 00:04:51.230 } 00:04:51.230 ]' 00:04:51.230 23:49:21 -- rpc/rpc.sh@32 -- # jq length 00:04:51.230 23:49:21 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:51.230 23:49:21 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:51.230 23:49:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.230 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.230 23:49:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.230 23:49:21 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:51.230 23:49:21 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:51.230 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.230 23:49:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.230 23:49:21 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:51.230 23:49:21 -- rpc/rpc.sh@36 -- # jq length 00:04:51.230 23:49:21 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.230 00:04:51.230 real 0m0.148s 00:04:51.230 user 0m0.096s 00:04:51.230 sys 0m0.017s 00:04:51.230 23:49:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.230 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.230 ************************************ 00:04:51.230 END TEST rpc_plugins 00:04:51.230 ************************************ 00:04:51.230 23:49:21 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:51.230 23:49:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.230 23:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.230 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.490 ************************************ 00:04:51.490 START TEST rpc_trace_cmd_test 00:04:51.490 ************************************ 00:04:51.490 23:49:21 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:51.490 23:49:21 -- rpc/rpc.sh@40 -- # local info 00:04:51.490 23:49:21 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.490 23:49:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.490 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.490 23:49:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.490 23:49:21 -- rpc/rpc.sh@42 -- # info='{ 00:04:51.490 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid180692", 00:04:51.490 "tpoint_group_mask": "0x8", 00:04:51.490 "iscsi_conn": { 00:04:51.490 "mask": "0x2", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "scsi": { 00:04:51.490 "mask": "0x4", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "bdev": { 00:04:51.490 "mask": "0x8", 00:04:51.490 "tpoint_mask": "0xffffffffffffffff" 00:04:51.490 }, 00:04:51.490 "nvmf_rdma": { 00:04:51.490 "mask": "0x10", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "nvmf_tcp": { 00:04:51.490 "mask": "0x20", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "ftl": { 00:04:51.490 "mask": "0x40", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "blobfs": { 00:04:51.490 "mask": "0x80", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "dsa": { 00:04:51.490 "mask": "0x200", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "thread": { 00:04:51.490 "mask": "0x400", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "nvme_pcie": { 00:04:51.490 "mask": "0x800", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "iaa": { 00:04:51.490 "mask": "0x1000", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "nvme_tcp": { 00:04:51.490 "mask": "0x2000", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "bdev_nvme": { 00:04:51.490 "mask": "0x4000", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 }, 00:04:51.490 "sock": { 00:04:51.490 "mask": "0x8000", 00:04:51.490 "tpoint_mask": "0x0" 00:04:51.490 } 00:04:51.490 }' 00:04:51.490 23:49:21 -- rpc/rpc.sh@43 -- # jq length 00:04:51.490 23:49:21 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:51.490 23:49:21 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.490 23:49:21 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.490 23:49:21 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
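The trace_get_info output above is consistent with the -e bdev flag the target was started with: tpoint_group_mask is 0x8 (the bdev group) and the bdev tpoint_mask is fully enabled, while every other group stays at 0x0. The startup notices earlier in the log already name the tools for looking at that data; a sketch of doing so by hand, reusing the pid and shm path reported there:

# Inspect the bdev tracepoints recorded by this spdk_tgt instance (sketch;
# pid 180692 and the shm path are the values printed at target startup).
spdk_trace -s spdk_tgt -p 180692             # capture a snapshot of events at runtime
cp /dev/shm/spdk_tgt_trace.pid180692 /tmp/   # keep the buffer for offline analysis/debug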
00:04:51.751 23:49:21 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.751 23:49:21 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.751 23:49:21 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:51.751 23:49:21 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:51.751 23:49:21 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:51.751 00:04:51.751 real 0m0.249s 00:04:51.751 user 0m0.214s 00:04:51.751 sys 0m0.027s 00:04:51.751 23:49:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.751 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.751 ************************************ 00:04:51.751 END TEST rpc_trace_cmd_test 00:04:51.751 ************************************ 00:04:51.751 23:49:21 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:51.751 23:49:21 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:51.751 23:49:21 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:51.751 23:49:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.751 23:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.751 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:52.012 ************************************ 00:04:52.012 START TEST rpc_daemon_integrity 00:04:52.012 ************************************ 00:04:52.012 23:49:21 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:52.012 23:49:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.012 23:49:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.012 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:04:52.012 23:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.012 23:49:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.012 23:49:22 -- rpc/rpc.sh@13 -- # jq length 00:04:52.012 23:49:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.012 23:49:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.012 23:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.012 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.012 23:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.012 23:49:22 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:52.012 23:49:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.012 23:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.012 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.012 23:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.012 23:49:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.012 { 00:04:52.012 "name": "Malloc2", 00:04:52.012 "aliases": [ 00:04:52.012 "90d52cbc-42ac-4573-9ab4-508f64260711" 00:04:52.012 ], 00:04:52.012 "product_name": "Malloc disk", 00:04:52.012 "block_size": 512, 00:04:52.012 "num_blocks": 16384, 00:04:52.012 "uuid": "90d52cbc-42ac-4573-9ab4-508f64260711", 00:04:52.012 "assigned_rate_limits": { 00:04:52.012 "rw_ios_per_sec": 0, 00:04:52.012 "rw_mbytes_per_sec": 0, 00:04:52.012 "r_mbytes_per_sec": 0, 00:04:52.012 "w_mbytes_per_sec": 0 00:04:52.012 }, 00:04:52.012 "claimed": false, 00:04:52.012 "zoned": false, 00:04:52.012 "supported_io_types": { 00:04:52.012 "read": true, 00:04:52.012 "write": true, 00:04:52.012 "unmap": true, 00:04:52.012 "write_zeroes": true, 00:04:52.012 "flush": true, 00:04:52.012 "reset": true, 00:04:52.012 "compare": false, 00:04:52.012 "compare_and_write": false, 00:04:52.012 "abort": true, 00:04:52.012 "nvme_admin": false, 00:04:52.012 "nvme_io": false 00:04:52.012 }, 00:04:52.012 "memory_domains": [ 00:04:52.012 { 00:04:52.012 "dma_device_id": "system", 00:04:52.012 
"dma_device_type": 1 00:04:52.012 }, 00:04:52.012 { 00:04:52.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.012 "dma_device_type": 2 00:04:52.012 } 00:04:52.012 ], 00:04:52.012 "driver_specific": {} 00:04:52.012 } 00:04:52.012 ]' 00:04:52.012 23:49:22 -- rpc/rpc.sh@17 -- # jq length 00:04:52.012 23:49:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.012 23:49:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:52.012 23:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.012 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.012 [2024-04-26 23:49:22.133792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:52.012 [2024-04-26 23:49:22.133820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.012 [2024-04-26 23:49:22.133835] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x181a650 00:04:52.012 [2024-04-26 23:49:22.133847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.012 [2024-04-26 23:49:22.135070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.012 [2024-04-26 23:49:22.135092] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.012 Passthru0 00:04:52.012 23:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.012 23:49:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.012 23:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.012 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.012 23:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.012 23:49:22 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.012 { 00:04:52.012 "name": "Malloc2", 00:04:52.012 "aliases": [ 00:04:52.012 "90d52cbc-42ac-4573-9ab4-508f64260711" 00:04:52.012 ], 00:04:52.012 "product_name": "Malloc disk", 00:04:52.012 "block_size": 512, 00:04:52.012 "num_blocks": 16384, 00:04:52.012 "uuid": "90d52cbc-42ac-4573-9ab4-508f64260711", 00:04:52.012 "assigned_rate_limits": { 00:04:52.012 "rw_ios_per_sec": 0, 00:04:52.012 "rw_mbytes_per_sec": 0, 00:04:52.012 "r_mbytes_per_sec": 0, 00:04:52.012 "w_mbytes_per_sec": 0 00:04:52.012 }, 00:04:52.012 "claimed": true, 00:04:52.012 "claim_type": "exclusive_write", 00:04:52.012 "zoned": false, 00:04:52.012 "supported_io_types": { 00:04:52.012 "read": true, 00:04:52.012 "write": true, 00:04:52.012 "unmap": true, 00:04:52.012 "write_zeroes": true, 00:04:52.012 "flush": true, 00:04:52.012 "reset": true, 00:04:52.012 "compare": false, 00:04:52.012 "compare_and_write": false, 00:04:52.012 "abort": true, 00:04:52.012 "nvme_admin": false, 00:04:52.012 "nvme_io": false 00:04:52.012 }, 00:04:52.012 "memory_domains": [ 00:04:52.012 { 00:04:52.012 "dma_device_id": "system", 00:04:52.012 "dma_device_type": 1 00:04:52.012 }, 00:04:52.012 { 00:04:52.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.012 "dma_device_type": 2 00:04:52.012 } 00:04:52.012 ], 00:04:52.012 "driver_specific": {} 00:04:52.012 }, 00:04:52.012 { 00:04:52.012 "name": "Passthru0", 00:04:52.012 "aliases": [ 00:04:52.012 "d223168a-4a81-5088-a5dd-f4c001837152" 00:04:52.012 ], 00:04:52.012 "product_name": "passthru", 00:04:52.012 "block_size": 512, 00:04:52.012 "num_blocks": 16384, 00:04:52.012 "uuid": "d223168a-4a81-5088-a5dd-f4c001837152", 00:04:52.012 "assigned_rate_limits": { 00:04:52.012 "rw_ios_per_sec": 0, 00:04:52.012 "rw_mbytes_per_sec": 0, 00:04:52.012 "r_mbytes_per_sec": 0, 00:04:52.012 
"w_mbytes_per_sec": 0 00:04:52.012 }, 00:04:52.012 "claimed": false, 00:04:52.012 "zoned": false, 00:04:52.012 "supported_io_types": { 00:04:52.012 "read": true, 00:04:52.012 "write": true, 00:04:52.012 "unmap": true, 00:04:52.012 "write_zeroes": true, 00:04:52.012 "flush": true, 00:04:52.012 "reset": true, 00:04:52.012 "compare": false, 00:04:52.012 "compare_and_write": false, 00:04:52.012 "abort": true, 00:04:52.012 "nvme_admin": false, 00:04:52.012 "nvme_io": false 00:04:52.012 }, 00:04:52.012 "memory_domains": [ 00:04:52.012 { 00:04:52.012 "dma_device_id": "system", 00:04:52.012 "dma_device_type": 1 00:04:52.012 }, 00:04:52.012 { 00:04:52.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.012 "dma_device_type": 2 00:04:52.012 } 00:04:52.012 ], 00:04:52.012 "driver_specific": { 00:04:52.012 "passthru": { 00:04:52.012 "name": "Passthru0", 00:04:52.012 "base_bdev_name": "Malloc2" 00:04:52.012 } 00:04:52.012 } 00:04:52.012 } 00:04:52.012 ]' 00:04:52.012 23:49:22 -- rpc/rpc.sh@21 -- # jq length 00:04:52.012 23:49:22 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.012 23:49:22 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.012 23:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.012 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.012 23:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.012 23:49:22 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:52.012 23:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.013 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.013 23:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.013 23:49:22 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.013 23:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.013 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.273 23:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.274 23:49:22 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.274 23:49:22 -- rpc/rpc.sh@26 -- # jq length 00:04:52.274 23:49:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.274 00:04:52.274 real 0m0.288s 00:04:52.274 user 0m0.185s 00:04:52.274 sys 0m0.041s 00:04:52.274 23:49:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.274 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.274 ************************************ 00:04:52.274 END TEST rpc_daemon_integrity 00:04:52.274 ************************************ 00:04:52.274 23:49:22 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:52.274 23:49:22 -- rpc/rpc.sh@84 -- # killprocess 180692 00:04:52.274 23:49:22 -- common/autotest_common.sh@936 -- # '[' -z 180692 ']' 00:04:52.274 23:49:22 -- common/autotest_common.sh@940 -- # kill -0 180692 00:04:52.274 23:49:22 -- common/autotest_common.sh@941 -- # uname 00:04:52.274 23:49:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.274 23:49:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 180692 00:04:52.274 23:49:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:52.274 23:49:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:52.274 23:49:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 180692' 00:04:52.274 killing process with pid 180692 00:04:52.274 23:49:22 -- common/autotest_common.sh@955 -- # kill 180692 00:04:52.274 23:49:22 -- common/autotest_common.sh@960 -- # wait 180692 00:04:52.535 00:04:52.535 real 0m2.884s 00:04:52.535 user 0m3.830s 00:04:52.535 
sys 0m0.869s 00:04:52.535 23:49:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.535 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.535 ************************************ 00:04:52.535 END TEST rpc 00:04:52.535 ************************************ 00:04:52.535 23:49:22 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.535 23:49:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.535 23:49:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.535 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.796 ************************************ 00:04:52.796 START TEST skip_rpc 00:04:52.796 ************************************ 00:04:52.796 23:49:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.796 * Looking for test storage... 00:04:52.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.796 23:49:22 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.796 23:49:22 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.797 23:49:22 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:52.797 23:49:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.797 23:49:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.797 23:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.797 ************************************ 00:04:52.797 START TEST skip_rpc 00:04:52.797 ************************************ 00:04:52.797 23:49:22 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:52.797 23:49:22 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=181579 00:04:52.797 23:49:22 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.797 23:49:22 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:52.797 23:49:22 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:53.057 [2024-04-26 23:49:23.052125] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
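This first skip_rpc case is a negative test: the target (pid 181579) is started with --no-rpc-server, so no RPC listener is created on /var/tmp/spdk.sock and the spdk_get_version call issued after the sleep is expected to fail. A minimal by-hand version of the same check, a sketch reusing the flags from the log and assuming the stock scripts/rpc.py client:

# Negative check: with --no-rpc-server nothing serves RPCs, so any client
# call should fail (sketch).
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5
./scripts/rpc.py spdk_get_version && echo "unexpected: RPC succeeded" || echo "RPC refused as expected"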
00:04:53.057 [2024-04-26 23:49:23.052171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181579 ] 00:04:53.057 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.057 [2024-04-26 23:49:23.111786] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.057 [2024-04-26 23:49:23.175511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.342 23:49:28 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.342 23:49:28 -- common/autotest_common.sh@638 -- # local es=0 00:04:58.342 23:49:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.342 23:49:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:58.342 23:49:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:58.342 23:49:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:58.342 23:49:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:58.342 23:49:28 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:58.342 23:49:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.342 23:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.342 23:49:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:58.342 23:49:28 -- common/autotest_common.sh@641 -- # es=1 00:04:58.342 23:49:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:58.342 23:49:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:58.342 23:49:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:58.342 23:49:28 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:58.342 23:49:28 -- rpc/skip_rpc.sh@23 -- # killprocess 181579 00:04:58.342 23:49:28 -- common/autotest_common.sh@936 -- # '[' -z 181579 ']' 00:04:58.342 23:49:28 -- common/autotest_common.sh@940 -- # kill -0 181579 00:04:58.343 23:49:28 -- common/autotest_common.sh@941 -- # uname 00:04:58.343 23:49:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.343 23:49:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 181579 00:04:58.343 23:49:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.343 23:49:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.343 23:49:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 181579' 00:04:58.343 killing process with pid 181579 00:04:58.343 23:49:28 -- common/autotest_common.sh@955 -- # kill 181579 00:04:58.343 23:49:28 -- common/autotest_common.sh@960 -- # wait 181579 00:04:58.343 00:04:58.343 real 0m5.274s 00:04:58.343 user 0m5.088s 00:04:58.343 sys 0m0.223s 00:04:58.343 23:49:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.343 23:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.343 ************************************ 00:04:58.343 END TEST skip_rpc 00:04:58.343 ************************************ 00:04:58.343 23:49:28 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:58.343 23:49:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.343 23:49:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.343 23:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.343 ************************************ 00:04:58.343 START TEST skip_rpc_with_json 00:04:58.343 ************************************ 00:04:58.343 
23:49:28 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:58.343 23:49:28 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:58.343 23:49:28 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=182623 00:04:58.343 23:49:28 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.343 23:49:28 -- rpc/skip_rpc.sh@31 -- # waitforlisten 182623 00:04:58.343 23:49:28 -- common/autotest_common.sh@817 -- # '[' -z 182623 ']' 00:04:58.343 23:49:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.343 23:49:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:58.343 23:49:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.343 23:49:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:58.343 23:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.343 23:49:28 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.343 [2024-04-26 23:49:28.510982] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:04:58.343 [2024-04-26 23:49:28.511036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182623 ] 00:04:58.343 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.605 [2024-04-26 23:49:28.574775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.605 [2024-04-26 23:49:28.650098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.174 23:49:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:59.174 23:49:29 -- common/autotest_common.sh@850 -- # return 0 00:04:59.174 23:49:29 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:59.174 23:49:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.174 23:49:29 -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 [2024-04-26 23:49:29.271096] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:59.174 request: 00:04:59.174 { 00:04:59.174 "trtype": "tcp", 00:04:59.174 "method": "nvmf_get_transports", 00:04:59.174 "req_id": 1 00:04:59.174 } 00:04:59.174 Got JSON-RPC error response 00:04:59.174 response: 00:04:59.174 { 00:04:59.174 "code": -19, 00:04:59.174 "message": "No such device" 00:04:59.174 } 00:04:59.174 23:49:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:59.174 23:49:29 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:59.174 23:49:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.174 23:49:29 -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 [2024-04-26 23:49:29.279196] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.174 23:49:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.174 23:49:29 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:59.174 23:49:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.174 23:49:29 -- common/autotest_common.sh@10 -- # set +x 00:04:59.433 23:49:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.433 23:49:29 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.433 { 00:04:59.433 
"subsystems": [ 00:04:59.433 { 00:04:59.433 "subsystem": "vfio_user_target", 00:04:59.433 "config": null 00:04:59.433 }, 00:04:59.433 { 00:04:59.433 "subsystem": "keyring", 00:04:59.433 "config": [] 00:04:59.433 }, 00:04:59.433 { 00:04:59.433 "subsystem": "iobuf", 00:04:59.433 "config": [ 00:04:59.433 { 00:04:59.433 "method": "iobuf_set_options", 00:04:59.433 "params": { 00:04:59.433 "small_pool_count": 8192, 00:04:59.433 "large_pool_count": 1024, 00:04:59.433 "small_bufsize": 8192, 00:04:59.433 "large_bufsize": 135168 00:04:59.433 } 00:04:59.433 } 00:04:59.433 ] 00:04:59.433 }, 00:04:59.433 { 00:04:59.433 "subsystem": "sock", 00:04:59.433 "config": [ 00:04:59.433 { 00:04:59.433 "method": "sock_impl_set_options", 00:04:59.433 "params": { 00:04:59.433 "impl_name": "posix", 00:04:59.433 "recv_buf_size": 2097152, 00:04:59.433 "send_buf_size": 2097152, 00:04:59.433 "enable_recv_pipe": true, 00:04:59.433 "enable_quickack": false, 00:04:59.433 "enable_placement_id": 0, 00:04:59.433 "enable_zerocopy_send_server": true, 00:04:59.433 "enable_zerocopy_send_client": false, 00:04:59.433 "zerocopy_threshold": 0, 00:04:59.433 "tls_version": 0, 00:04:59.433 "enable_ktls": false 00:04:59.433 } 00:04:59.433 }, 00:04:59.433 { 00:04:59.433 "method": "sock_impl_set_options", 00:04:59.433 "params": { 00:04:59.433 "impl_name": "ssl", 00:04:59.433 "recv_buf_size": 4096, 00:04:59.433 "send_buf_size": 4096, 00:04:59.433 "enable_recv_pipe": true, 00:04:59.433 "enable_quickack": false, 00:04:59.433 "enable_placement_id": 0, 00:04:59.433 "enable_zerocopy_send_server": true, 00:04:59.433 "enable_zerocopy_send_client": false, 00:04:59.433 "zerocopy_threshold": 0, 00:04:59.433 "tls_version": 0, 00:04:59.433 "enable_ktls": false 00:04:59.433 } 00:04:59.433 } 00:04:59.433 ] 00:04:59.433 }, 00:04:59.433 { 00:04:59.433 "subsystem": "vmd", 00:04:59.433 "config": [] 00:04:59.433 }, 00:04:59.433 { 00:04:59.434 "subsystem": "accel", 00:04:59.434 "config": [ 00:04:59.434 { 00:04:59.434 "method": "accel_set_options", 00:04:59.434 "params": { 00:04:59.434 "small_cache_size": 128, 00:04:59.434 "large_cache_size": 16, 00:04:59.434 "task_count": 2048, 00:04:59.434 "sequence_count": 2048, 00:04:59.434 "buf_count": 2048 00:04:59.434 } 00:04:59.434 } 00:04:59.434 ] 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "subsystem": "bdev", 00:04:59.434 "config": [ 00:04:59.434 { 00:04:59.434 "method": "bdev_set_options", 00:04:59.434 "params": { 00:04:59.434 "bdev_io_pool_size": 65535, 00:04:59.434 "bdev_io_cache_size": 256, 00:04:59.434 "bdev_auto_examine": true, 00:04:59.434 "iobuf_small_cache_size": 128, 00:04:59.434 "iobuf_large_cache_size": 16 00:04:59.434 } 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "method": "bdev_raid_set_options", 00:04:59.434 "params": { 00:04:59.434 "process_window_size_kb": 1024 00:04:59.434 } 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "method": "bdev_iscsi_set_options", 00:04:59.434 "params": { 00:04:59.434 "timeout_sec": 30 00:04:59.434 } 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "method": "bdev_nvme_set_options", 00:04:59.434 "params": { 00:04:59.434 "action_on_timeout": "none", 00:04:59.434 "timeout_us": 0, 00:04:59.434 "timeout_admin_us": 0, 00:04:59.434 "keep_alive_timeout_ms": 10000, 00:04:59.434 "arbitration_burst": 0, 00:04:59.434 "low_priority_weight": 0, 00:04:59.434 "medium_priority_weight": 0, 00:04:59.434 "high_priority_weight": 0, 00:04:59.434 "nvme_adminq_poll_period_us": 10000, 00:04:59.434 "nvme_ioq_poll_period_us": 0, 00:04:59.434 "io_queue_requests": 0, 00:04:59.434 "delay_cmd_submit": true, 
00:04:59.434 "transport_retry_count": 4, 00:04:59.434 "bdev_retry_count": 3, 00:04:59.434 "transport_ack_timeout": 0, 00:04:59.434 "ctrlr_loss_timeout_sec": 0, 00:04:59.434 "reconnect_delay_sec": 0, 00:04:59.434 "fast_io_fail_timeout_sec": 0, 00:04:59.434 "disable_auto_failback": false, 00:04:59.434 "generate_uuids": false, 00:04:59.434 "transport_tos": 0, 00:04:59.434 "nvme_error_stat": false, 00:04:59.434 "rdma_srq_size": 0, 00:04:59.434 "io_path_stat": false, 00:04:59.434 "allow_accel_sequence": false, 00:04:59.434 "rdma_max_cq_size": 0, 00:04:59.434 "rdma_cm_event_timeout_ms": 0, 00:04:59.434 "dhchap_digests": [ 00:04:59.434 "sha256", 00:04:59.434 "sha384", 00:04:59.434 "sha512" 00:04:59.434 ], 00:04:59.434 "dhchap_dhgroups": [ 00:04:59.434 "null", 00:04:59.434 "ffdhe2048", 00:04:59.434 "ffdhe3072", 00:04:59.434 "ffdhe4096", 00:04:59.434 "ffdhe6144", 00:04:59.434 "ffdhe8192" 00:04:59.434 ] 00:04:59.434 } 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "method": "bdev_nvme_set_hotplug", 00:04:59.434 "params": { 00:04:59.434 "period_us": 100000, 00:04:59.434 "enable": false 00:04:59.434 } 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "method": "bdev_wait_for_examine" 00:04:59.434 } 00:04:59.434 ] 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "subsystem": "scsi", 00:04:59.434 "config": null 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "subsystem": "scheduler", 00:04:59.434 "config": [ 00:04:59.434 { 00:04:59.434 "method": "framework_set_scheduler", 00:04:59.434 "params": { 00:04:59.434 "name": "static" 00:04:59.434 } 00:04:59.434 } 00:04:59.434 ] 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "subsystem": "vhost_scsi", 00:04:59.434 "config": [] 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "subsystem": "vhost_blk", 00:04:59.434 "config": [] 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "subsystem": "ublk", 00:04:59.434 "config": [] 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "subsystem": "nbd", 00:04:59.434 "config": [] 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "subsystem": "nvmf", 00:04:59.434 "config": [ 00:04:59.434 { 00:04:59.434 "method": "nvmf_set_config", 00:04:59.434 "params": { 00:04:59.434 "discovery_filter": "match_any", 00:04:59.434 "admin_cmd_passthru": { 00:04:59.434 "identify_ctrlr": false 00:04:59.434 } 00:04:59.434 } 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "method": "nvmf_set_max_subsystems", 00:04:59.434 "params": { 00:04:59.434 "max_subsystems": 1024 00:04:59.434 } 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "method": "nvmf_set_crdt", 00:04:59.434 "params": { 00:04:59.434 "crdt1": 0, 00:04:59.434 "crdt2": 0, 00:04:59.434 "crdt3": 0 00:04:59.434 } 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "method": "nvmf_create_transport", 00:04:59.434 "params": { 00:04:59.434 "trtype": "TCP", 00:04:59.434 "max_queue_depth": 128, 00:04:59.434 "max_io_qpairs_per_ctrlr": 127, 00:04:59.434 "in_capsule_data_size": 4096, 00:04:59.434 "max_io_size": 131072, 00:04:59.434 "io_unit_size": 131072, 00:04:59.434 "max_aq_depth": 128, 00:04:59.434 "num_shared_buffers": 511, 00:04:59.434 "buf_cache_size": 4294967295, 00:04:59.434 "dif_insert_or_strip": false, 00:04:59.434 "zcopy": false, 00:04:59.434 "c2h_success": true, 00:04:59.434 "sock_priority": 0, 00:04:59.434 "abort_timeout_sec": 1, 00:04:59.434 "ack_timeout": 0, 00:04:59.434 "data_wr_pool_size": 0 00:04:59.434 } 00:04:59.434 } 00:04:59.434 ] 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "subsystem": "iscsi", 00:04:59.434 "config": [ 00:04:59.434 { 00:04:59.434 "method": "iscsi_set_options", 00:04:59.434 "params": { 00:04:59.434 "node_base": 
"iqn.2016-06.io.spdk", 00:04:59.434 "max_sessions": 128, 00:04:59.434 "max_connections_per_session": 2, 00:04:59.434 "max_queue_depth": 64, 00:04:59.434 "default_time2wait": 2, 00:04:59.434 "default_time2retain": 20, 00:04:59.434 "first_burst_length": 8192, 00:04:59.434 "immediate_data": true, 00:04:59.434 "allow_duplicated_isid": false, 00:04:59.434 "error_recovery_level": 0, 00:04:59.434 "nop_timeout": 60, 00:04:59.434 "nop_in_interval": 30, 00:04:59.434 "disable_chap": false, 00:04:59.434 "require_chap": false, 00:04:59.434 "mutual_chap": false, 00:04:59.434 "chap_group": 0, 00:04:59.434 "max_large_datain_per_connection": 64, 00:04:59.434 "max_r2t_per_connection": 4, 00:04:59.434 "pdu_pool_size": 36864, 00:04:59.434 "immediate_data_pool_size": 16384, 00:04:59.434 "data_out_pool_size": 2048 00:04:59.434 } 00:04:59.434 } 00:04:59.434 ] 00:04:59.434 } 00:04:59.434 ] 00:04:59.434 } 00:04:59.434 23:49:29 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:59.434 23:49:29 -- rpc/skip_rpc.sh@40 -- # killprocess 182623 00:04:59.434 23:49:29 -- common/autotest_common.sh@936 -- # '[' -z 182623 ']' 00:04:59.434 23:49:29 -- common/autotest_common.sh@940 -- # kill -0 182623 00:04:59.434 23:49:29 -- common/autotest_common.sh@941 -- # uname 00:04:59.434 23:49:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.434 23:49:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 182623 00:04:59.434 23:49:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.434 23:49:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.434 23:49:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 182623' 00:04:59.434 killing process with pid 182623 00:04:59.434 23:49:29 -- common/autotest_common.sh@955 -- # kill 182623 00:04:59.434 23:49:29 -- common/autotest_common.sh@960 -- # wait 182623 00:04:59.693 23:49:29 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=182967 00:04:59.693 23:49:29 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:59.693 23:49:29 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.990 23:49:34 -- rpc/skip_rpc.sh@50 -- # killprocess 182967 00:05:04.990 23:49:34 -- common/autotest_common.sh@936 -- # '[' -z 182967 ']' 00:05:04.990 23:49:34 -- common/autotest_common.sh@940 -- # kill -0 182967 00:05:04.990 23:49:34 -- common/autotest_common.sh@941 -- # uname 00:05:04.990 23:49:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.990 23:49:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 182967 00:05:04.990 23:49:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.990 23:49:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.990 23:49:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 182967' 00:05:04.990 killing process with pid 182967 00:05:04.990 23:49:34 -- common/autotest_common.sh@955 -- # kill 182967 00:05:04.990 23:49:34 -- common/autotest_common.sh@960 -- # wait 182967 00:05:04.990 23:49:34 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.990 23:49:34 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.990 00:05:04.990 real 0m6.498s 00:05:04.990 user 0m6.347s 00:05:04.990 sys 0m0.515s 00:05:04.990 23:49:34 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.990 23:49:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.990 ************************************ 00:05:04.990 END TEST skip_rpc_with_json 00:05:04.990 ************************************ 00:05:04.990 23:49:34 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.990 23:49:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.990 23:49:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.990 23:49:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.990 ************************************ 00:05:04.990 START TEST skip_rpc_with_delay 00:05:04.990 ************************************ 00:05:04.990 23:49:35 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:04.990 23:49:35 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.990 23:49:35 -- common/autotest_common.sh@638 -- # local es=0 00:05:04.990 23:49:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.990 23:49:35 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.990 23:49:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.990 23:49:35 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.990 23:49:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.991 23:49:35 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.991 23:49:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.991 23:49:35 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.991 23:49:35 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:04.991 23:49:35 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.991 [2024-04-26 23:49:35.193434] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
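The *ERROR* just printed is the expected outcome of the delay test: spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc, since there would be no RPC server to wait on. A minimal out-of-harness sketch of the same negative check (SPDK_BIN is only a shorthand for the spdk_tgt path used in this workspace):

  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  # The conflicting flags must make the target exit non-zero.
  if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --no-rpc-server together with --wait-for-rpc" >&2
    exit 1
  fi
  echo "conflicting flags rejected as expected"

The remaining error output from that aborted start continues below.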
00:05:04.991 [2024-04-26 23:49:35.193507] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:04.991 23:49:35 -- common/autotest_common.sh@641 -- # es=1 00:05:04.991 23:49:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:04.991 23:49:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:04.991 23:49:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:04.991 00:05:04.991 real 0m0.065s 00:05:04.991 user 0m0.041s 00:05:04.991 sys 0m0.023s 00:05:04.991 23:49:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.991 23:49:35 -- common/autotest_common.sh@10 -- # set +x 00:05:04.991 ************************************ 00:05:04.991 END TEST skip_rpc_with_delay 00:05:04.991 ************************************ 00:05:05.252 23:49:35 -- rpc/skip_rpc.sh@77 -- # uname 00:05:05.252 23:49:35 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.252 23:49:35 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.252 23:49:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.252 23:49:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.252 23:49:35 -- common/autotest_common.sh@10 -- # set +x 00:05:05.252 ************************************ 00:05:05.252 START TEST exit_on_failed_rpc_init 00:05:05.252 ************************************ 00:05:05.252 23:49:35 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:05.252 23:49:35 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=184041 00:05:05.252 23:49:35 -- rpc/skip_rpc.sh@63 -- # waitforlisten 184041 00:05:05.252 23:49:35 -- common/autotest_common.sh@817 -- # '[' -z 184041 ']' 00:05:05.252 23:49:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.252 23:49:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:05.252 23:49:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.252 23:49:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:05.252 23:49:35 -- common/autotest_common.sh@10 -- # set +x 00:05:05.252 23:49:35 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.252 [2024-04-26 23:49:35.448288] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:05:05.252 [2024-04-26 23:49:35.448343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184041 ] 00:05:05.513 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.513 [2024-04-26 23:49:35.512829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.513 [2024-04-26 23:49:35.588312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.084 23:49:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:06.084 23:49:36 -- common/autotest_common.sh@850 -- # return 0 00:05:06.084 23:49:36 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.084 23:49:36 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.084 23:49:36 -- common/autotest_common.sh@638 -- # local es=0 00:05:06.084 23:49:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.084 23:49:36 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.084 23:49:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:06.084 23:49:36 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.084 23:49:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:06.084 23:49:36 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.084 23:49:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:06.084 23:49:36 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.084 23:49:36 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:06.084 23:49:36 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.084 [2024-04-26 23:49:36.253806] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:06.084 [2024-04-26 23:49:36.253869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184376 ] 00:05:06.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.344 [2024-04-26 23:49:36.312910] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.344 [2024-04-26 23:49:36.376639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.344 [2024-04-26 23:49:36.376703] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
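This "in use. Specify another." failure is exactly what exit_on_failed_rpc_init is looking for: the second spdk_tgt is pointed at the same default RPC socket (/var/tmp/spdk.sock) that the first instance already owns, so rpc.c cannot listen and the app stops with a non-zero status. A hedged sketch of the collision and of the usual way to avoid it outside this test (the second socket name is illustrative only):

  # First target owns the default RPC socket /var/tmp/spdk.sock.
  "$SPDK_BIN" -m 0x1 &
  # A second target on the same default socket fails, as in the log above.
  # Giving it its own socket via -r avoids the collision:
  "$SPDK_BIN" -m 0x2 -r /var/tmp/spdk_second.sock &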
00:05:06.344 [2024-04-26 23:49:36.376712] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.344 [2024-04-26 23:49:36.376719] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.344 23:49:36 -- common/autotest_common.sh@641 -- # es=234 00:05:06.344 23:49:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:06.344 23:49:36 -- common/autotest_common.sh@650 -- # es=106 00:05:06.344 23:49:36 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:06.344 23:49:36 -- common/autotest_common.sh@658 -- # es=1 00:05:06.344 23:49:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:06.344 23:49:36 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.344 23:49:36 -- rpc/skip_rpc.sh@70 -- # killprocess 184041 00:05:06.344 23:49:36 -- common/autotest_common.sh@936 -- # '[' -z 184041 ']' 00:05:06.344 23:49:36 -- common/autotest_common.sh@940 -- # kill -0 184041 00:05:06.344 23:49:36 -- common/autotest_common.sh@941 -- # uname 00:05:06.344 23:49:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:06.344 23:49:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 184041 00:05:06.344 23:49:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:06.344 23:49:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:06.344 23:49:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 184041' 00:05:06.344 killing process with pid 184041 00:05:06.344 23:49:36 -- common/autotest_common.sh@955 -- # kill 184041 00:05:06.344 23:49:36 -- common/autotest_common.sh@960 -- # wait 184041 00:05:06.605 00:05:06.605 real 0m1.299s 00:05:06.605 user 0m1.510s 00:05:06.605 sys 0m0.354s 00:05:06.605 23:49:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.605 23:49:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.605 ************************************ 00:05:06.605 END TEST exit_on_failed_rpc_init 00:05:06.605 ************************************ 00:05:06.605 23:49:36 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.605 00:05:06.605 real 0m13.967s 00:05:06.605 user 0m13.292s 00:05:06.605 sys 0m1.578s 00:05:06.605 23:49:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.605 23:49:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.605 ************************************ 00:05:06.605 END TEST skip_rpc 00:05:06.605 ************************************ 00:05:06.605 23:49:36 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.605 23:49:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.605 23:49:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.605 23:49:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.866 ************************************ 00:05:06.866 START TEST rpc_client 00:05:06.866 ************************************ 00:05:06.866 23:49:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.866 * Looking for test storage... 
00:05:06.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:06.866 23:49:36 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:06.866 OK 00:05:06.866 23:49:37 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:06.866 00:05:06.866 real 0m0.121s 00:05:06.866 user 0m0.063s 00:05:06.866 sys 0m0.065s 00:05:06.866 23:49:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.866 23:49:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.866 ************************************ 00:05:06.866 END TEST rpc_client 00:05:06.866 ************************************ 00:05:06.866 23:49:37 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.866 23:49:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.866 23:49:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.866 23:49:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.127 ************************************ 00:05:07.127 START TEST json_config 00:05:07.127 ************************************ 00:05:07.127 23:49:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:07.127 23:49:37 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.127 23:49:37 -- nvmf/common.sh@7 -- # uname -s 00:05:07.127 23:49:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.127 23:49:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.127 23:49:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.127 23:49:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.127 23:49:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.127 23:49:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.127 23:49:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.127 23:49:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.127 23:49:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.127 23:49:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.127 23:49:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.127 23:49:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.127 23:49:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.127 23:49:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.127 23:49:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.127 23:49:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.127 23:49:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.127 23:49:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.127 23:49:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.127 23:49:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.127 23:49:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.127 23:49:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.127 23:49:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.127 23:49:37 -- paths/export.sh@5 -- # export PATH 00:05:07.127 23:49:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.127 23:49:37 -- nvmf/common.sh@47 -- # : 0 00:05:07.127 23:49:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:07.127 23:49:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:07.127 23:49:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.127 23:49:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.127 23:49:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.127 23:49:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:07.127 23:49:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:07.127 23:49:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:07.127 23:49:37 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:07.127 23:49:37 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:07.127 23:49:37 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:07.127 23:49:37 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:07.127 23:49:37 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:07.127 23:49:37 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:07.127 23:49:37 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:07.127 23:49:37 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:07.127 23:49:37 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:07.127 23:49:37 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:07.127 23:49:37 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:07.127 23:49:37 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:07.127 23:49:37 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:07.127 23:49:37 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:07.127 23:49:37 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.128 23:49:37 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:07.128 INFO: JSON configuration test init 00:05:07.128 23:49:37 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:07.128 23:49:37 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:07.128 23:49:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:07.128 23:49:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.128 23:49:37 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:07.128 23:49:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:07.128 23:49:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.128 23:49:37 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:07.128 23:49:37 -- json_config/common.sh@9 -- # local app=target 00:05:07.128 23:49:37 -- json_config/common.sh@10 -- # shift 00:05:07.128 23:49:37 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.128 23:49:37 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.128 23:49:37 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.128 23:49:37 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.128 23:49:37 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.128 23:49:37 -- json_config/common.sh@22 -- # app_pid["$app"]=184652 00:05:07.128 23:49:37 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.128 Waiting for target to run... 00:05:07.128 23:49:37 -- json_config/common.sh@25 -- # waitforlisten 184652 /var/tmp/spdk_tgt.sock 00:05:07.128 23:49:37 -- common/autotest_common.sh@817 -- # '[' -z 184652 ']' 00:05:07.128 23:49:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.128 23:49:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:07.128 23:49:37 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:07.128 23:49:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.128 23:49:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:07.128 23:49:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.388 [2024-04-26 23:49:37.386936] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:05:07.388 [2024-04-26 23:49:37.386986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184652 ] 00:05:07.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.648 [2024-04-26 23:49:37.665294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.648 [2024-04-26 23:49:37.717423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.219 23:49:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:08.219 23:49:38 -- common/autotest_common.sh@850 -- # return 0 00:05:08.219 23:49:38 -- json_config/common.sh@26 -- # echo '' 00:05:08.219 00:05:08.219 23:49:38 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:08.219 23:49:38 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:08.219 23:49:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:08.219 23:49:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.219 23:49:38 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:08.219 23:49:38 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:08.219 23:49:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:08.219 23:49:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.219 23:49:38 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:08.219 23:49:38 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:08.219 23:49:38 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:08.789 23:49:38 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:08.789 23:49:38 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:08.789 23:49:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:08.789 23:49:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.789 23:49:38 -- json_config/json_config.sh@45 -- # local ret=0 00:05:08.789 23:49:38 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:08.789 23:49:38 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:08.789 23:49:38 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:08.789 23:49:38 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:08.789 23:49:38 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:08.789 23:49:38 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:08.789 23:49:38 -- json_config/json_config.sh@48 -- # local get_types 00:05:08.789 23:49:38 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:08.789 23:49:38 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:08.789 23:49:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:08.789 23:49:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.789 23:49:38 -- json_config/json_config.sh@55 -- # return 0 00:05:08.789 23:49:38 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:08.789 23:49:38 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:08.789 23:49:38 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:08.789 23:49:38 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:08.789 23:49:38 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:08.789 23:49:38 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:08.789 23:49:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:08.789 23:49:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.789 23:49:38 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:08.789 23:49:38 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:08.789 23:49:38 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:08.789 23:49:38 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.789 23:49:38 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.049 MallocForNvmf0 00:05:09.049 23:49:39 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.049 23:49:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.049 MallocForNvmf1 00:05:09.049 23:49:39 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.049 23:49:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.309 [2024-04-26 23:49:39.377645] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.309 23:49:39 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.309 23:49:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.569 23:49:39 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.569 23:49:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.569 23:49:39 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.569 23:49:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.829 23:49:39 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.829 23:49:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.829 [2024-04-26 23:49:39.991669] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.829 23:49:40 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:09.829 23:49:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:09.829 
23:49:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.829 23:49:40 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:09.829 23:49:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:09.829 23:49:40 -- common/autotest_common.sh@10 -- # set +x 00:05:10.089 23:49:40 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:10.089 23:49:40 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.089 23:49:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.089 MallocBdevForConfigChangeCheck 00:05:10.089 23:49:40 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:10.089 23:49:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.089 23:49:40 -- common/autotest_common.sh@10 -- # set +x 00:05:10.089 23:49:40 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:10.089 23:49:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.349 23:49:40 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:10.349 INFO: shutting down applications... 00:05:10.349 23:49:40 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:10.349 23:49:40 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:10.349 23:49:40 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:10.349 23:49:40 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:10.920 Calling clear_iscsi_subsystem 00:05:10.920 Calling clear_nvmf_subsystem 00:05:10.920 Calling clear_nbd_subsystem 00:05:10.920 Calling clear_ublk_subsystem 00:05:10.920 Calling clear_vhost_blk_subsystem 00:05:10.920 Calling clear_vhost_scsi_subsystem 00:05:10.920 Calling clear_bdev_subsystem 00:05:10.920 23:49:40 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:10.920 23:49:40 -- json_config/json_config.sh@343 -- # count=100 00:05:10.920 23:49:40 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:10.920 23:49:40 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.920 23:49:40 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:10.920 23:49:40 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:11.181 23:49:41 -- json_config/json_config.sh@345 -- # break 00:05:11.181 23:49:41 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:11.181 23:49:41 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:11.181 23:49:41 -- json_config/common.sh@31 -- # local app=target 00:05:11.181 23:49:41 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.181 23:49:41 -- json_config/common.sh@35 -- # [[ -n 184652 ]] 00:05:11.181 23:49:41 -- json_config/common.sh@38 -- # kill -SIGINT 184652 00:05:11.181 23:49:41 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.181 23:49:41 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.181 23:49:41 -- json_config/common.sh@41 -- # kill -0 184652 00:05:11.181 23:49:41 -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.754 23:49:41 -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.754 23:49:41 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.754 23:49:41 -- json_config/common.sh@41 -- # kill -0 184652 00:05:11.754 23:49:41 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.754 23:49:41 -- json_config/common.sh@43 -- # break 00:05:11.754 23:49:41 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.754 23:49:41 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.754 SPDK target shutdown done 00:05:11.754 23:49:41 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:11.754 INFO: relaunching applications... 00:05:11.754 23:49:41 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.754 23:49:41 -- json_config/common.sh@9 -- # local app=target 00:05:11.754 23:49:41 -- json_config/common.sh@10 -- # shift 00:05:11.754 23:49:41 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.754 23:49:41 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.754 23:49:41 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.754 23:49:41 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.754 23:49:41 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.754 23:49:41 -- json_config/common.sh@22 -- # app_pid["$app"]=185640 00:05:11.754 23:49:41 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.754 Waiting for target to run... 00:05:11.754 23:49:41 -- json_config/common.sh@25 -- # waitforlisten 185640 /var/tmp/spdk_tgt.sock 00:05:11.754 23:49:41 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.754 23:49:41 -- common/autotest_common.sh@817 -- # '[' -z 185640 ']' 00:05:11.754 23:49:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.754 23:49:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:11.754 23:49:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.754 23:49:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:11.754 23:49:41 -- common/autotest_common.sh@10 -- # set +x 00:05:11.754 [2024-04-26 23:49:41.900439] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:05:11.754 [2024-04-26 23:49:41.900499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185640 ] 00:05:11.754 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.057 [2024-04-26 23:49:42.159316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.057 [2024-04-26 23:49:42.217796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.672 [2024-04-26 23:49:42.704117] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.672 [2024-04-26 23:49:42.736518] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:12.672 23:49:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.672 23:49:42 -- common/autotest_common.sh@850 -- # return 0 00:05:12.672 23:49:42 -- json_config/common.sh@26 -- # echo '' 00:05:12.672 00:05:12.672 23:49:42 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:12.672 23:49:42 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:12.672 INFO: Checking if target configuration is the same... 00:05:12.672 23:49:42 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.672 23:49:42 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:12.672 23:49:42 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.672 + '[' 2 -ne 2 ']' 00:05:12.672 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.672 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:12.672 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.672 +++ basename /dev/fd/62 00:05:12.672 ++ mktemp /tmp/62.XXX 00:05:12.672 + tmp_file_1=/tmp/62.6k9 00:05:12.672 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.672 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.672 + tmp_file_2=/tmp/spdk_tgt_config.json.l9b 00:05:12.672 + ret=0 00:05:12.672 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.939 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.939 + diff -u /tmp/62.6k9 /tmp/spdk_tgt_config.json.l9b 00:05:12.939 + echo 'INFO: JSON config files are the same' 00:05:12.939 INFO: JSON config files are the same 00:05:12.939 + rm /tmp/62.6k9 /tmp/spdk_tgt_config.json.l9b 00:05:12.939 + exit 0 00:05:12.939 23:49:43 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:12.939 23:49:43 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:12.939 INFO: changing configuration and checking if this can be detected... 
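Both the "JSON config files are the same" verdict just printed and the change detection that follows rely on the same comparison: each side is normalized with config_filter.py -method sort and then diffed, so key order and whitespace can neither mask nor fake a difference. A condensed sketch of that comparison, assuming config_filter.py reads the config on stdin as it does in json_diff.sh (temporary file names are illustrative):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
  # Sort the live config and the saved config the same way, then diff them.
  "$RPC" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > /tmp/live_sorted.json
  "$FILTER" -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'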
00:05:12.939 23:49:43 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.939 23:49:43 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.198 23:49:43 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.198 23:49:43 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:13.198 23:49:43 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.198 + '[' 2 -ne 2 ']' 00:05:13.198 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.198 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:13.198 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.198 +++ basename /dev/fd/62 00:05:13.198 ++ mktemp /tmp/62.XXX 00:05:13.198 + tmp_file_1=/tmp/62.KEx 00:05:13.198 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.198 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.198 + tmp_file_2=/tmp/spdk_tgt_config.json.HBA 00:05:13.198 + ret=0 00:05:13.198 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.459 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.459 + diff -u /tmp/62.KEx /tmp/spdk_tgt_config.json.HBA 00:05:13.459 + ret=1 00:05:13.459 + echo '=== Start of file: /tmp/62.KEx ===' 00:05:13.459 + cat /tmp/62.KEx 00:05:13.459 + echo '=== End of file: /tmp/62.KEx ===' 00:05:13.459 + echo '' 00:05:13.459 + echo '=== Start of file: /tmp/spdk_tgt_config.json.HBA ===' 00:05:13.459 + cat /tmp/spdk_tgt_config.json.HBA 00:05:13.459 + echo '=== End of file: /tmp/spdk_tgt_config.json.HBA ===' 00:05:13.459 + echo '' 00:05:13.459 + rm /tmp/62.KEx /tmp/spdk_tgt_config.json.HBA 00:05:13.459 + exit 1 00:05:13.459 23:49:43 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:13.459 INFO: configuration change detected. 
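The detected change is introduced deliberately: bdev_malloc_delete removes the MallocBdevForConfigChangeCheck sentinel over RPC, after which the same sorted diff no longer matches and json_diff.sh exits 1. A sketch of that sequence, reusing the RPC and FILTER shorthands from the previous sketch:

  # Delete the sentinel bdev, then repeat the comparison; a non-empty diff is the expected outcome.
  "$RPC" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  "$RPC" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > /tmp/live_sorted.json
  diff -u /tmp/live_sorted.json /tmp/file_sorted.json || echo 'INFO: configuration change detected.'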
00:05:13.459 23:49:43 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:13.459 23:49:43 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:13.459 23:49:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:13.459 23:49:43 -- common/autotest_common.sh@10 -- # set +x 00:05:13.459 23:49:43 -- json_config/json_config.sh@307 -- # local ret=0 00:05:13.459 23:49:43 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:13.459 23:49:43 -- json_config/json_config.sh@317 -- # [[ -n 185640 ]] 00:05:13.459 23:49:43 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:13.459 23:49:43 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:13.459 23:49:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:13.459 23:49:43 -- common/autotest_common.sh@10 -- # set +x 00:05:13.459 23:49:43 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:13.459 23:49:43 -- json_config/json_config.sh@193 -- # uname -s 00:05:13.459 23:49:43 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:13.459 23:49:43 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:13.459 23:49:43 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:13.459 23:49:43 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:13.459 23:49:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:13.459 23:49:43 -- common/autotest_common.sh@10 -- # set +x 00:05:13.720 23:49:43 -- json_config/json_config.sh@323 -- # killprocess 185640 00:05:13.720 23:49:43 -- common/autotest_common.sh@936 -- # '[' -z 185640 ']' 00:05:13.720 23:49:43 -- common/autotest_common.sh@940 -- # kill -0 185640 00:05:13.720 23:49:43 -- common/autotest_common.sh@941 -- # uname 00:05:13.720 23:49:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.720 23:49:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 185640 00:05:13.720 23:49:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:13.720 23:49:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:13.720 23:49:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 185640' 00:05:13.720 killing process with pid 185640 00:05:13.720 23:49:43 -- common/autotest_common.sh@955 -- # kill 185640 00:05:13.720 23:49:43 -- common/autotest_common.sh@960 -- # wait 185640 00:05:13.982 23:49:44 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.982 23:49:44 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:13.982 23:49:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:13.982 23:49:44 -- common/autotest_common.sh@10 -- # set +x 00:05:13.982 23:49:44 -- json_config/json_config.sh@328 -- # return 0 00:05:13.982 23:49:44 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:13.982 INFO: Success 00:05:13.982 00:05:13.982 real 0m6.866s 00:05:13.982 user 0m8.320s 00:05:13.982 sys 0m1.650s 00:05:13.982 23:49:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.982 23:49:44 -- common/autotest_common.sh@10 -- # set +x 00:05:13.982 ************************************ 00:05:13.982 END TEST json_config 00:05:13.982 ************************************ 00:05:13.982 23:49:44 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.982 23:49:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.982 23:49:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.982 23:49:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.243 ************************************ 00:05:14.243 START TEST json_config_extra_key 00:05:14.243 ************************************ 00:05:14.243 23:49:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.243 23:49:44 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.243 23:49:44 -- nvmf/common.sh@7 -- # uname -s 00:05:14.243 23:49:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.243 23:49:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.243 23:49:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.243 23:49:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.243 23:49:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.243 23:49:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.243 23:49:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.243 23:49:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.243 23:49:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.243 23:49:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.244 23:49:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:14.244 23:49:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:14.244 23:49:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.244 23:49:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.244 23:49:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.244 23:49:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.244 23:49:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.244 23:49:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.244 23:49:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.244 23:49:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.244 23:49:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.244 23:49:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.244 23:49:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.244 23:49:44 -- paths/export.sh@5 -- # export PATH 00:05:14.244 23:49:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.244 23:49:44 -- nvmf/common.sh@47 -- # : 0 00:05:14.244 23:49:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:14.244 23:49:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:14.244 23:49:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.244 23:49:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.244 23:49:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.244 23:49:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:14.244 23:49:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:14.244 23:49:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:14.244 INFO: launching applications... 
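For the extra_key variant the target is launched directly against a pre-written JSON file instead of being configured over RPC, and the next log lines show exactly that. A stripped-down sketch of the launch step (the harness's waitforlisten polling is omitted here):

  "$SPDK_BIN" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json &
  app_pid=$!   # remembered so the shutdown loop can signal it later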
00:05:14.244 23:49:44 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.244 23:49:44 -- json_config/common.sh@9 -- # local app=target 00:05:14.244 23:49:44 -- json_config/common.sh@10 -- # shift 00:05:14.244 23:49:44 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.244 23:49:44 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.244 23:49:44 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.244 23:49:44 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.244 23:49:44 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.244 23:49:44 -- json_config/common.sh@22 -- # app_pid["$app"]=186417 00:05:14.244 23:49:44 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.244 Waiting for target to run... 00:05:14.244 23:49:44 -- json_config/common.sh@25 -- # waitforlisten 186417 /var/tmp/spdk_tgt.sock 00:05:14.244 23:49:44 -- common/autotest_common.sh@817 -- # '[' -z 186417 ']' 00:05:14.244 23:49:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.244 23:49:44 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.244 23:49:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.244 23:49:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.244 23:49:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.244 23:49:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.244 [2024-04-26 23:49:44.437607] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:14.244 [2024-04-26 23:49:44.437686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186417 ] 00:05:14.505 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.767 [2024-04-26 23:49:44.725811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.767 [2024-04-26 23:49:44.782404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.033 23:49:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.033 23:49:45 -- common/autotest_common.sh@850 -- # return 0 00:05:15.033 23:49:45 -- json_config/common.sh@26 -- # echo '' 00:05:15.033 00:05:15.033 23:49:45 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:15.033 INFO: shutting down applications... 
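Shutdown is cooperative: the harness sends SIGINT to the recorded pid and then probes it with kill -0 for up to 30 half-second intervals before declaring the target gone, which is the loop visible in the lines that follow. A sketch of that loop:

  kill -SIGINT "$app_pid"
  for i in $(seq 1 30); do
    # kill -0 only probes the pid; once it fails, the process has exited.
    kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
    sleep 0.5
  done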
00:05:15.033 23:49:45 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:15.033 23:49:45 -- json_config/common.sh@31 -- # local app=target 00:05:15.033 23:49:45 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.033 23:49:45 -- json_config/common.sh@35 -- # [[ -n 186417 ]] 00:05:15.033 23:49:45 -- json_config/common.sh@38 -- # kill -SIGINT 186417 00:05:15.033 23:49:45 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.033 23:49:45 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.033 23:49:45 -- json_config/common.sh@41 -- # kill -0 186417 00:05:15.033 23:49:45 -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.612 23:49:45 -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.612 23:49:45 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.612 23:49:45 -- json_config/common.sh@41 -- # kill -0 186417 00:05:15.612 23:49:45 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.612 23:49:45 -- json_config/common.sh@43 -- # break 00:05:15.612 23:49:45 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.612 23:49:45 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.612 SPDK target shutdown done 00:05:15.612 23:49:45 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:15.612 Success 00:05:15.612 00:05:15.612 real 0m1.432s 00:05:15.612 user 0m1.054s 00:05:15.612 sys 0m0.398s 00:05:15.612 23:49:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.612 23:49:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.612 ************************************ 00:05:15.612 END TEST json_config_extra_key 00:05:15.612 ************************************ 00:05:15.612 23:49:45 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.612 23:49:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.612 23:49:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.612 23:49:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.875 ************************************ 00:05:15.875 START TEST alias_rpc 00:05:15.875 ************************************ 00:05:15.875 23:49:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.875 * Looking for test storage... 00:05:15.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:15.875 23:49:45 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:15.875 23:49:45 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=186762 00:05:15.875 23:49:45 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 186762 00:05:15.875 23:49:45 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.875 23:49:45 -- common/autotest_common.sh@817 -- # '[' -z 186762 ']' 00:05:15.875 23:49:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.875 23:49:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.875 23:49:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.875 23:49:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.875 23:49:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.875 [2024-04-26 23:49:46.053224] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:15.875 [2024-04-26 23:49:46.053294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186762 ] 00:05:15.875 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.136 [2024-04-26 23:49:46.116788] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.136 [2024-04-26 23:49:46.181583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.709 23:49:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.709 23:49:46 -- common/autotest_common.sh@850 -- # return 0 00:05:16.709 23:49:46 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:16.970 23:49:46 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 186762 00:05:16.970 23:49:46 -- common/autotest_common.sh@936 -- # '[' -z 186762 ']' 00:05:16.970 23:49:46 -- common/autotest_common.sh@940 -- # kill -0 186762 00:05:16.970 23:49:46 -- common/autotest_common.sh@941 -- # uname 00:05:16.970 23:49:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.970 23:49:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 186762 00:05:16.970 23:49:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.970 23:49:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.970 23:49:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 186762' 00:05:16.970 killing process with pid 186762 00:05:16.970 23:49:47 -- common/autotest_common.sh@955 -- # kill 186762 00:05:16.970 23:49:47 -- common/autotest_common.sh@960 -- # wait 186762 00:05:17.232 00:05:17.232 real 0m1.336s 00:05:17.232 user 0m1.468s 00:05:17.232 sys 0m0.338s 00:05:17.232 23:49:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.232 23:49:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.232 ************************************ 00:05:17.232 END TEST alias_rpc 00:05:17.232 ************************************ 00:05:17.232 23:49:47 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:17.232 23:49:47 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.232 23:49:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.232 23:49:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.232 23:49:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.232 ************************************ 00:05:17.232 START TEST spdkcli_tcp 00:05:17.232 ************************************ 00:05:17.232 23:49:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.494 * Looking for test storage... 
00:05:17.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:17.494 23:49:47 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:17.494 23:49:47 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:17.494 23:49:47 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:17.494 23:49:47 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.494 23:49:47 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.494 23:49:47 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.494 23:49:47 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.494 23:49:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:17.494 23:49:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.494 23:49:47 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=187077 00:05:17.494 23:49:47 -- spdkcli/tcp.sh@27 -- # waitforlisten 187077 00:05:17.494 23:49:47 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.494 23:49:47 -- common/autotest_common.sh@817 -- # '[' -z 187077 ']' 00:05:17.494 23:49:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.494 23:49:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.494 23:49:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.494 23:49:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.494 23:49:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.494 [2024-04-26 23:49:47.581001] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:05:17.494 [2024-04-26 23:49:47.581062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187077 ] 00:05:17.494 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.494 [2024-04-26 23:49:47.648919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.756 [2024-04-26 23:49:47.724978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.756 [2024-04-26 23:49:47.725072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.329 23:49:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:18.330 23:49:48 -- common/autotest_common.sh@850 -- # return 0 00:05:18.330 23:49:48 -- spdkcli/tcp.sh@31 -- # socat_pid=187233 00:05:18.330 23:49:48 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.330 23:49:48 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.330 [ 00:05:18.330 "bdev_malloc_delete", 00:05:18.330 "bdev_malloc_create", 00:05:18.330 "bdev_null_resize", 00:05:18.330 "bdev_null_delete", 00:05:18.330 "bdev_null_create", 00:05:18.330 "bdev_nvme_cuse_unregister", 00:05:18.330 "bdev_nvme_cuse_register", 00:05:18.330 "bdev_opal_new_user", 00:05:18.330 "bdev_opal_set_lock_state", 00:05:18.330 "bdev_opal_delete", 00:05:18.330 "bdev_opal_get_info", 00:05:18.330 "bdev_opal_create", 00:05:18.330 "bdev_nvme_opal_revert", 00:05:18.330 "bdev_nvme_opal_init", 00:05:18.330 "bdev_nvme_send_cmd", 00:05:18.330 "bdev_nvme_get_path_iostat", 00:05:18.330 "bdev_nvme_get_mdns_discovery_info", 00:05:18.330 "bdev_nvme_stop_mdns_discovery", 00:05:18.330 "bdev_nvme_start_mdns_discovery", 00:05:18.330 "bdev_nvme_set_multipath_policy", 00:05:18.330 "bdev_nvme_set_preferred_path", 00:05:18.330 "bdev_nvme_get_io_paths", 00:05:18.330 "bdev_nvme_remove_error_injection", 00:05:18.330 "bdev_nvme_add_error_injection", 00:05:18.330 "bdev_nvme_get_discovery_info", 00:05:18.330 "bdev_nvme_stop_discovery", 00:05:18.330 "bdev_nvme_start_discovery", 00:05:18.330 "bdev_nvme_get_controller_health_info", 00:05:18.330 "bdev_nvme_disable_controller", 00:05:18.330 "bdev_nvme_enable_controller", 00:05:18.330 "bdev_nvme_reset_controller", 00:05:18.330 "bdev_nvme_get_transport_statistics", 00:05:18.330 "bdev_nvme_apply_firmware", 00:05:18.330 "bdev_nvme_detach_controller", 00:05:18.330 "bdev_nvme_get_controllers", 00:05:18.330 "bdev_nvme_attach_controller", 00:05:18.330 "bdev_nvme_set_hotplug", 00:05:18.330 "bdev_nvme_set_options", 00:05:18.330 "bdev_passthru_delete", 00:05:18.330 "bdev_passthru_create", 00:05:18.330 "bdev_lvol_grow_lvstore", 00:05:18.330 "bdev_lvol_get_lvols", 00:05:18.330 "bdev_lvol_get_lvstores", 00:05:18.330 "bdev_lvol_delete", 00:05:18.330 "bdev_lvol_set_read_only", 00:05:18.330 "bdev_lvol_resize", 00:05:18.330 "bdev_lvol_decouple_parent", 00:05:18.330 "bdev_lvol_inflate", 00:05:18.330 "bdev_lvol_rename", 00:05:18.330 "bdev_lvol_clone_bdev", 00:05:18.330 "bdev_lvol_clone", 00:05:18.330 "bdev_lvol_snapshot", 00:05:18.330 "bdev_lvol_create", 00:05:18.330 "bdev_lvol_delete_lvstore", 00:05:18.330 "bdev_lvol_rename_lvstore", 00:05:18.330 "bdev_lvol_create_lvstore", 00:05:18.330 "bdev_raid_set_options", 00:05:18.330 "bdev_raid_remove_base_bdev", 00:05:18.330 "bdev_raid_add_base_bdev", 00:05:18.330 "bdev_raid_delete", 00:05:18.330 "bdev_raid_create", 
00:05:18.330 "bdev_raid_get_bdevs", 00:05:18.330 "bdev_error_inject_error", 00:05:18.330 "bdev_error_delete", 00:05:18.330 "bdev_error_create", 00:05:18.330 "bdev_split_delete", 00:05:18.330 "bdev_split_create", 00:05:18.330 "bdev_delay_delete", 00:05:18.330 "bdev_delay_create", 00:05:18.330 "bdev_delay_update_latency", 00:05:18.330 "bdev_zone_block_delete", 00:05:18.330 "bdev_zone_block_create", 00:05:18.330 "blobfs_create", 00:05:18.330 "blobfs_detect", 00:05:18.330 "blobfs_set_cache_size", 00:05:18.330 "bdev_aio_delete", 00:05:18.330 "bdev_aio_rescan", 00:05:18.330 "bdev_aio_create", 00:05:18.330 "bdev_ftl_set_property", 00:05:18.330 "bdev_ftl_get_properties", 00:05:18.330 "bdev_ftl_get_stats", 00:05:18.330 "bdev_ftl_unmap", 00:05:18.330 "bdev_ftl_unload", 00:05:18.330 "bdev_ftl_delete", 00:05:18.330 "bdev_ftl_load", 00:05:18.330 "bdev_ftl_create", 00:05:18.330 "bdev_virtio_attach_controller", 00:05:18.330 "bdev_virtio_scsi_get_devices", 00:05:18.330 "bdev_virtio_detach_controller", 00:05:18.330 "bdev_virtio_blk_set_hotplug", 00:05:18.330 "bdev_iscsi_delete", 00:05:18.330 "bdev_iscsi_create", 00:05:18.330 "bdev_iscsi_set_options", 00:05:18.330 "accel_error_inject_error", 00:05:18.330 "ioat_scan_accel_module", 00:05:18.330 "dsa_scan_accel_module", 00:05:18.330 "iaa_scan_accel_module", 00:05:18.330 "vfu_virtio_create_scsi_endpoint", 00:05:18.330 "vfu_virtio_scsi_remove_target", 00:05:18.330 "vfu_virtio_scsi_add_target", 00:05:18.330 "vfu_virtio_create_blk_endpoint", 00:05:18.330 "vfu_virtio_delete_endpoint", 00:05:18.330 "keyring_file_remove_key", 00:05:18.330 "keyring_file_add_key", 00:05:18.330 "iscsi_get_histogram", 00:05:18.330 "iscsi_enable_histogram", 00:05:18.330 "iscsi_set_options", 00:05:18.330 "iscsi_get_auth_groups", 00:05:18.330 "iscsi_auth_group_remove_secret", 00:05:18.330 "iscsi_auth_group_add_secret", 00:05:18.330 "iscsi_delete_auth_group", 00:05:18.330 "iscsi_create_auth_group", 00:05:18.330 "iscsi_set_discovery_auth", 00:05:18.330 "iscsi_get_options", 00:05:18.330 "iscsi_target_node_request_logout", 00:05:18.330 "iscsi_target_node_set_redirect", 00:05:18.330 "iscsi_target_node_set_auth", 00:05:18.330 "iscsi_target_node_add_lun", 00:05:18.330 "iscsi_get_stats", 00:05:18.330 "iscsi_get_connections", 00:05:18.330 "iscsi_portal_group_set_auth", 00:05:18.330 "iscsi_start_portal_group", 00:05:18.330 "iscsi_delete_portal_group", 00:05:18.330 "iscsi_create_portal_group", 00:05:18.330 "iscsi_get_portal_groups", 00:05:18.330 "iscsi_delete_target_node", 00:05:18.330 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.330 "iscsi_target_node_add_pg_ig_maps", 00:05:18.330 "iscsi_create_target_node", 00:05:18.330 "iscsi_get_target_nodes", 00:05:18.330 "iscsi_delete_initiator_group", 00:05:18.330 "iscsi_initiator_group_remove_initiators", 00:05:18.330 "iscsi_initiator_group_add_initiators", 00:05:18.330 "iscsi_create_initiator_group", 00:05:18.330 "iscsi_get_initiator_groups", 00:05:18.330 "nvmf_set_crdt", 00:05:18.330 "nvmf_set_config", 00:05:18.330 "nvmf_set_max_subsystems", 00:05:18.330 "nvmf_subsystem_get_listeners", 00:05:18.330 "nvmf_subsystem_get_qpairs", 00:05:18.330 "nvmf_subsystem_get_controllers", 00:05:18.330 "nvmf_get_stats", 00:05:18.330 "nvmf_get_transports", 00:05:18.330 "nvmf_create_transport", 00:05:18.330 "nvmf_get_targets", 00:05:18.330 "nvmf_delete_target", 00:05:18.330 "nvmf_create_target", 00:05:18.330 "nvmf_subsystem_allow_any_host", 00:05:18.330 "nvmf_subsystem_remove_host", 00:05:18.330 "nvmf_subsystem_add_host", 00:05:18.330 "nvmf_ns_remove_host", 00:05:18.330 
"nvmf_ns_add_host", 00:05:18.330 "nvmf_subsystem_remove_ns", 00:05:18.330 "nvmf_subsystem_add_ns", 00:05:18.330 "nvmf_subsystem_listener_set_ana_state", 00:05:18.330 "nvmf_discovery_get_referrals", 00:05:18.330 "nvmf_discovery_remove_referral", 00:05:18.330 "nvmf_discovery_add_referral", 00:05:18.330 "nvmf_subsystem_remove_listener", 00:05:18.330 "nvmf_subsystem_add_listener", 00:05:18.330 "nvmf_delete_subsystem", 00:05:18.330 "nvmf_create_subsystem", 00:05:18.330 "nvmf_get_subsystems", 00:05:18.330 "env_dpdk_get_mem_stats", 00:05:18.330 "nbd_get_disks", 00:05:18.330 "nbd_stop_disk", 00:05:18.330 "nbd_start_disk", 00:05:18.330 "ublk_recover_disk", 00:05:18.330 "ublk_get_disks", 00:05:18.330 "ublk_stop_disk", 00:05:18.330 "ublk_start_disk", 00:05:18.330 "ublk_destroy_target", 00:05:18.330 "ublk_create_target", 00:05:18.330 "virtio_blk_create_transport", 00:05:18.330 "virtio_blk_get_transports", 00:05:18.330 "vhost_controller_set_coalescing", 00:05:18.330 "vhost_get_controllers", 00:05:18.330 "vhost_delete_controller", 00:05:18.330 "vhost_create_blk_controller", 00:05:18.330 "vhost_scsi_controller_remove_target", 00:05:18.330 "vhost_scsi_controller_add_target", 00:05:18.330 "vhost_start_scsi_controller", 00:05:18.330 "vhost_create_scsi_controller", 00:05:18.330 "thread_set_cpumask", 00:05:18.330 "framework_get_scheduler", 00:05:18.330 "framework_set_scheduler", 00:05:18.330 "framework_get_reactors", 00:05:18.330 "thread_get_io_channels", 00:05:18.330 "thread_get_pollers", 00:05:18.330 "thread_get_stats", 00:05:18.330 "framework_monitor_context_switch", 00:05:18.330 "spdk_kill_instance", 00:05:18.330 "log_enable_timestamps", 00:05:18.330 "log_get_flags", 00:05:18.330 "log_clear_flag", 00:05:18.330 "log_set_flag", 00:05:18.330 "log_get_level", 00:05:18.330 "log_set_level", 00:05:18.330 "log_get_print_level", 00:05:18.330 "log_set_print_level", 00:05:18.330 "framework_enable_cpumask_locks", 00:05:18.330 "framework_disable_cpumask_locks", 00:05:18.330 "framework_wait_init", 00:05:18.330 "framework_start_init", 00:05:18.330 "scsi_get_devices", 00:05:18.330 "bdev_get_histogram", 00:05:18.330 "bdev_enable_histogram", 00:05:18.330 "bdev_set_qos_limit", 00:05:18.330 "bdev_set_qd_sampling_period", 00:05:18.330 "bdev_get_bdevs", 00:05:18.330 "bdev_reset_iostat", 00:05:18.330 "bdev_get_iostat", 00:05:18.330 "bdev_examine", 00:05:18.330 "bdev_wait_for_examine", 00:05:18.330 "bdev_set_options", 00:05:18.330 "notify_get_notifications", 00:05:18.330 "notify_get_types", 00:05:18.330 "accel_get_stats", 00:05:18.330 "accel_set_options", 00:05:18.330 "accel_set_driver", 00:05:18.330 "accel_crypto_key_destroy", 00:05:18.330 "accel_crypto_keys_get", 00:05:18.330 "accel_crypto_key_create", 00:05:18.330 "accel_assign_opc", 00:05:18.330 "accel_get_module_info", 00:05:18.330 "accel_get_opc_assignments", 00:05:18.330 "vmd_rescan", 00:05:18.330 "vmd_remove_device", 00:05:18.330 "vmd_enable", 00:05:18.330 "sock_get_default_impl", 00:05:18.330 "sock_set_default_impl", 00:05:18.330 "sock_impl_set_options", 00:05:18.330 "sock_impl_get_options", 00:05:18.330 "iobuf_get_stats", 00:05:18.330 "iobuf_set_options", 00:05:18.330 "keyring_get_keys", 00:05:18.331 "framework_get_pci_devices", 00:05:18.331 "framework_get_config", 00:05:18.331 "framework_get_subsystems", 00:05:18.331 "vfu_tgt_set_base_path", 00:05:18.331 "trace_get_info", 00:05:18.331 "trace_get_tpoint_group_mask", 00:05:18.331 "trace_disable_tpoint_group", 00:05:18.331 "trace_enable_tpoint_group", 00:05:18.331 "trace_clear_tpoint_mask", 00:05:18.331 
"trace_set_tpoint_mask", 00:05:18.331 "spdk_get_version", 00:05:18.331 "rpc_get_methods" 00:05:18.331 ] 00:05:18.331 23:49:48 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.331 23:49:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:18.331 23:49:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.331 23:49:48 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.331 23:49:48 -- spdkcli/tcp.sh@38 -- # killprocess 187077 00:05:18.331 23:49:48 -- common/autotest_common.sh@936 -- # '[' -z 187077 ']' 00:05:18.331 23:49:48 -- common/autotest_common.sh@940 -- # kill -0 187077 00:05:18.331 23:49:48 -- common/autotest_common.sh@941 -- # uname 00:05:18.331 23:49:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.331 23:49:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 187077 00:05:18.592 23:49:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.592 23:49:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.592 23:49:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 187077' 00:05:18.592 killing process with pid 187077 00:05:18.592 23:49:48 -- common/autotest_common.sh@955 -- # kill 187077 00:05:18.592 23:49:48 -- common/autotest_common.sh@960 -- # wait 187077 00:05:18.592 00:05:18.592 real 0m1.397s 00:05:18.592 user 0m2.545s 00:05:18.592 sys 0m0.425s 00:05:18.592 23:49:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.592 23:49:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.592 ************************************ 00:05:18.592 END TEST spdkcli_tcp 00:05:18.592 ************************************ 00:05:18.853 23:49:48 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.853 23:49:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.853 23:49:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.853 23:49:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.853 ************************************ 00:05:18.853 START TEST dpdk_mem_utility 00:05:18.853 ************************************ 00:05:18.853 23:49:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.115 * Looking for test storage... 00:05:19.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:19.115 23:49:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.115 23:49:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=187453 00:05:19.115 23:49:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 187453 00:05:19.115 23:49:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.115 23:49:49 -- common/autotest_common.sh@817 -- # '[' -z 187453 ']' 00:05:19.115 23:49:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.115 23:49:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:19.115 23:49:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.115 23:49:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:19.115 23:49:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.115 [2024-04-26 23:49:49.166300] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:19.115 [2024-04-26 23:49:49.166367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187453 ] 00:05:19.115 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.115 [2024-04-26 23:49:49.234964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.115 [2024-04-26 23:49:49.308978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.058 23:49:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.058 23:49:49 -- common/autotest_common.sh@850 -- # return 0 00:05:20.058 23:49:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:20.058 23:49:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:20.058 23:49:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.058 23:49:49 -- common/autotest_common.sh@10 -- # set +x 00:05:20.058 { 00:05:20.058 "filename": "/tmp/spdk_mem_dump.txt" 00:05:20.058 } 00:05:20.058 23:49:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.058 23:49:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.058 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:20.058 1 heaps totaling size 814.000000 MiB 00:05:20.058 size: 814.000000 MiB heap id: 0 00:05:20.058 end heaps---------- 00:05:20.058 8 mempools totaling size 598.116089 MiB 00:05:20.058 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:20.058 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:20.058 size: 84.521057 MiB name: bdev_io_187453 00:05:20.058 size: 51.011292 MiB name: evtpool_187453 00:05:20.058 size: 50.003479 MiB name: msgpool_187453 00:05:20.058 size: 21.763794 MiB name: PDU_Pool 00:05:20.058 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:20.059 size: 0.026123 MiB name: Session_Pool 00:05:20.059 end mempools------- 00:05:20.059 6 memzones totaling size 4.142822 MiB 00:05:20.059 size: 1.000366 MiB name: RG_ring_0_187453 00:05:20.059 size: 1.000366 MiB name: RG_ring_1_187453 00:05:20.059 size: 1.000366 MiB name: RG_ring_4_187453 00:05:20.059 size: 1.000366 MiB name: RG_ring_5_187453 00:05:20.059 size: 0.125366 MiB name: RG_ring_2_187453 00:05:20.059 size: 0.015991 MiB name: RG_ring_3_187453 00:05:20.059 end memzones------- 00:05:20.059 23:49:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:20.059 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:20.059 list of free elements. 
size: 12.519348 MiB 00:05:20.059 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:20.059 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:20.059 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:20.059 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:20.059 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:20.059 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:20.059 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:20.059 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:20.059 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:20.059 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:20.059 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:20.059 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:20.059 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:20.059 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:20.059 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:20.059 list of standard malloc elements. size: 199.218079 MiB 00:05:20.059 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:20.059 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:20.059 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:20.059 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:20.059 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:20.059 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:20.059 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:20.059 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:20.059 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:20.059 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:20.059 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:20.059 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:20.059 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:20.059 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:20.059 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:20.059 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:20.059 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:20.059 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:20.059 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:20.059 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:20.059 list of memzone associated elements. size: 602.262573 MiB 00:05:20.059 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:20.059 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:20.059 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:20.059 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:20.059 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:20.059 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_187453_0 00:05:20.059 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:20.059 associated memzone info: size: 48.002930 MiB name: MP_evtpool_187453_0 00:05:20.059 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:20.059 associated memzone info: size: 48.002930 MiB name: MP_msgpool_187453_0 00:05:20.059 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:20.059 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:20.059 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:20.059 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:20.059 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:20.059 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_187453 00:05:20.059 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:20.059 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_187453 00:05:20.059 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:20.059 associated memzone info: size: 1.007996 MiB name: MP_evtpool_187453 00:05:20.059 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:20.059 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:20.059 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:20.059 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:20.059 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:20.059 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:20.059 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:20.059 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:20.059 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:20.059 associated memzone info: size: 1.000366 MiB name: RG_ring_0_187453 00:05:20.059 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:20.059 associated memzone info: size: 1.000366 MiB name: RG_ring_1_187453 00:05:20.059 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:20.059 associated memzone info: size: 1.000366 MiB name: RG_ring_4_187453 00:05:20.059 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:20.059 associated memzone info: size: 1.000366 MiB name: RG_ring_5_187453 00:05:20.059 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:20.059 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_187453 00:05:20.059 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:20.059 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:20.059 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:20.059 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:20.059 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:20.059 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:20.059 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:20.059 associated memzone info: size: 0.125366 MiB name: RG_ring_2_187453 00:05:20.059 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:20.059 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:20.059 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:20.059 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:20.059 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:20.059 associated memzone info: size: 0.015991 MiB name: RG_ring_3_187453 00:05:20.059 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:20.059 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:20.059 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:20.059 associated memzone info: size: 0.000183 MiB name: MP_msgpool_187453 00:05:20.059 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:20.059 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_187453 00:05:20.059 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:20.059 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:20.059 23:49:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:20.059 23:49:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 187453 00:05:20.059 23:49:50 -- common/autotest_common.sh@936 -- # '[' -z 187453 ']' 00:05:20.059 23:49:50 -- common/autotest_common.sh@940 -- # kill -0 187453 00:05:20.059 23:49:50 -- common/autotest_common.sh@941 -- # uname 00:05:20.059 23:49:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.059 23:49:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 187453 00:05:20.059 23:49:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.059 23:49:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.059 23:49:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 187453' 00:05:20.059 killing process with pid 187453 00:05:20.059 23:49:50 -- common/autotest_common.sh@955 -- # kill 187453 00:05:20.059 23:49:50 -- common/autotest_common.sh@960 -- # wait 187453 00:05:20.321 00:05:20.321 real 0m1.303s 00:05:20.321 user 0m1.351s 00:05:20.321 sys 0m0.397s 00:05:20.321 23:49:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.321 23:49:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.321 ************************************ 00:05:20.321 END TEST dpdk_mem_utility 00:05:20.321 ************************************ 00:05:20.321 23:49:50 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.321 23:49:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.321 23:49:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.321 23:49:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.321 
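The dpdk_mem_utility output above — the heap/mempool/memzone summary followed by the per-element dump — comes from two runs of scripts/dpdk_mem_info.py against a dump file produced by the env_dpdk_get_mem_stats RPC. With a target already running as in the log, reproducing it by hand is roughly:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 1) ask the running target to write its DPDK memory state to a file
    #    (the RPC answers {"filename": "/tmp/spdk_mem_dump.txt"}, as seen above)
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats
    # 2) summarize heaps, mempools and memzones
    "$SPDK/scripts/dpdk_mem_info.py"
    # 3) per-element detail for heap 0, matching the second half of the dump
    "$SPDK/scripts/dpdk_mem_info.py" -m 0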
************************************ 00:05:20.321 START TEST event 00:05:20.321 ************************************ 00:05:20.321 23:49:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.582 * Looking for test storage... 00:05:20.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:20.582 23:49:50 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:20.582 23:49:50 -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.582 23:49:50 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.582 23:49:50 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:20.582 23:49:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.583 23:49:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.583 ************************************ 00:05:20.583 START TEST event_perf 00:05:20.583 ************************************ 00:05:20.583 23:49:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.583 Running I/O for 1 seconds...[2024-04-26 23:49:50.780362] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:20.583 [2024-04-26 23:49:50.780476] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187822 ] 00:05:20.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.843 [2024-04-26 23:49:50.850505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.843 [2024-04-26 23:49:50.929723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.843 [2024-04-26 23:49:50.929927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.843 [2024-04-26 23:49:50.930215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.843 [2024-04-26 23:49:50.930216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.787 Running I/O for 1 seconds... 00:05:21.787 lcore 0: 171168 00:05:21.787 lcore 1: 171169 00:05:21.787 lcore 2: 171165 00:05:21.787 lcore 3: 171168 00:05:21.787 done. 
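The event suite drives small standalone binaries rather than spdk_tgt; each takes a core mask and a runtime in seconds, and the per-lcore counts above are event_perf's output for a one-second run on four cores. Invoking the three binaries by hand looks roughly like the run_test lines in this log (paths assumed from the workspace layout above):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # schedule events across 4 cores (-m 0xF) for 1 second and report per-lcore counts
    "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1
    # single-core reactor tick test and reactor performance test, 1 second each
    "$SPDK/test/event/reactor/reactor" -t 1
    "$SPDK/test/event/reactor_perf/reactor_perf" -t 1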
00:05:21.787 00:05:21.787 real 0m1.225s 00:05:21.787 user 0m4.135s 00:05:21.787 sys 0m0.089s 00:05:21.787 23:49:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.787 23:49:51 -- common/autotest_common.sh@10 -- # set +x 00:05:21.787 ************************************ 00:05:21.787 END TEST event_perf 00:05:21.787 ************************************ 00:05:22.048 23:49:52 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.048 23:49:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:22.049 23:49:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.049 23:49:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.049 ************************************ 00:05:22.049 START TEST event_reactor 00:05:22.049 ************************************ 00:05:22.049 23:49:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.049 [2024-04-26 23:49:52.195791] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:22.049 [2024-04-26 23:49:52.195894] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188090 ] 00:05:22.049 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.049 [2024-04-26 23:49:52.259536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.309 [2024-04-26 23:49:52.324356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.268 test_start 00:05:23.268 oneshot 00:05:23.268 tick 100 00:05:23.268 tick 100 00:05:23.268 tick 250 00:05:23.268 tick 100 00:05:23.268 tick 100 00:05:23.268 tick 250 00:05:23.268 tick 100 00:05:23.268 tick 500 00:05:23.268 tick 100 00:05:23.268 tick 100 00:05:23.268 tick 250 00:05:23.268 tick 100 00:05:23.268 tick 100 00:05:23.268 test_end 00:05:23.268 00:05:23.268 real 0m1.203s 00:05:23.268 user 0m1.133s 00:05:23.268 sys 0m0.066s 00:05:23.268 23:49:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.268 23:49:53 -- common/autotest_common.sh@10 -- # set +x 00:05:23.268 ************************************ 00:05:23.268 END TEST event_reactor 00:05:23.268 ************************************ 00:05:23.268 23:49:53 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.268 23:49:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:23.268 23:49:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.268 23:49:53 -- common/autotest_common.sh@10 -- # set +x 00:05:23.529 ************************************ 00:05:23.529 START TEST event_reactor_perf 00:05:23.529 ************************************ 00:05:23.529 23:49:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.529 [2024-04-26 23:49:53.591392] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:05:23.529 [2024-04-26 23:49:53.591495] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188444 ] 00:05:23.529 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.529 [2024-04-26 23:49:53.657545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.529 [2024-04-26 23:49:53.730392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.913 test_start 00:05:24.913 test_end 00:05:24.913 Performance: 363200 events per second 00:05:24.913 00:05:24.913 real 0m1.213s 00:05:24.913 user 0m1.131s 00:05:24.913 sys 0m0.077s 00:05:24.913 23:49:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.913 23:49:54 -- common/autotest_common.sh@10 -- # set +x 00:05:24.913 ************************************ 00:05:24.913 END TEST event_reactor_perf 00:05:24.913 ************************************ 00:05:24.913 23:49:54 -- event/event.sh@49 -- # uname -s 00:05:24.913 23:49:54 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:24.913 23:49:54 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.913 23:49:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.913 23:49:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.913 23:49:54 -- common/autotest_common.sh@10 -- # set +x 00:05:24.913 ************************************ 00:05:24.913 START TEST event_scheduler 00:05:24.913 ************************************ 00:05:24.913 23:49:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.913 * Looking for test storage... 00:05:24.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:24.913 23:49:55 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.913 23:49:55 -- scheduler/scheduler.sh@35 -- # scheduler_pid=188831 00:05:24.913 23:49:55 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.913 23:49:55 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.913 23:49:55 -- scheduler/scheduler.sh@37 -- # waitforlisten 188831 00:05:24.913 23:49:55 -- common/autotest_common.sh@817 -- # '[' -z 188831 ']' 00:05:24.913 23:49:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.913 23:49:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.913 23:49:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.913 23:49:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.913 23:49:55 -- common/autotest_common.sh@10 -- # set +x 00:05:25.173 [2024-04-26 23:49:55.135293] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:05:25.173 [2024-04-26 23:49:55.135359] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188831 ] 00:05:25.173 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.173 [2024-04-26 23:49:55.193584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.173 [2024-04-26 23:49:55.257768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.173 [2024-04-26 23:49:55.257911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.173 [2024-04-26 23:49:55.257968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.173 [2024-04-26 23:49:55.257969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.743 23:49:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.743 23:49:55 -- common/autotest_common.sh@850 -- # return 0 00:05:25.743 23:49:55 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.743 23:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:25.743 23:49:55 -- common/autotest_common.sh@10 -- # set +x 00:05:25.743 POWER: Env isn't set yet! 00:05:25.743 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:25.743 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.743 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.743 POWER: Attempting to initialise PSTAT power management... 00:05:25.743 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:25.743 POWER: Initialized successfully for lcore 0 power management 00:05:25.743 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:25.743 POWER: Initialized successfully for lcore 1 power management 00:05:25.743 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:25.743 POWER: Initialized successfully for lcore 2 power management 00:05:26.004 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:26.004 POWER: Initialized successfully for lcore 3 power management 00:05:26.004 23:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.004 23:49:55 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:26.004 23:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.004 23:49:55 -- common/autotest_common.sh@10 -- # set +x 00:05:26.004 [2024-04-26 23:49:56.031330] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
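The scheduler test starts its app with --wait-for-rpc, switches the framework to the dynamic scheduler over RPC, and only then lets initialization finish; the POWER lines above are the CPU governors being flipped to 'performance' for the run. Issued by hand against the app's RPC socket, those two calls would be roughly:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # assumes the scheduler app was started with --wait-for-rpc, as in the log
    "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic
    "$SPDK/scripts/rpc.py" framework_start_init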
00:05:26.004 23:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.004 23:49:56 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:26.004 23:49:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.004 23:49:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.004 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.004 ************************************ 00:05:26.004 START TEST scheduler_create_thread 00:05:26.004 ************************************ 00:05:26.004 23:49:56 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:26.004 23:49:56 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:26.004 23:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.004 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.004 2 00:05:26.004 23:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.004 23:49:56 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:26.004 23:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.004 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.004 3 00:05:26.004 23:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.004 23:49:56 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:26.004 23:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.004 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.265 4 00:05:26.265 23:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.265 23:49:56 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:26.265 23:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.265 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.265 5 00:05:26.265 23:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.265 23:49:56 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:26.265 23:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.265 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.265 6 00:05:26.265 23:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.266 23:49:56 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:26.266 23:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.266 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.266 7 00:05:26.266 23:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.266 23:49:56 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:26.266 23:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.266 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.266 8 00:05:26.266 23:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.266 23:49:56 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:26.266 23:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.266 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.266 9 00:05:26.266 
23:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.266 23:49:56 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:26.266 23:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:26.266 23:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:27.650 10 00:05:27.650 23:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.650 23:49:57 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.650 23:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.650 23:49:57 -- common/autotest_common.sh@10 -- # set +x 00:05:28.594 23:49:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.594 23:49:58 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:28.594 23:49:58 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:28.594 23:49:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.594 23:49:58 -- common/autotest_common.sh@10 -- # set +x 00:05:29.537 23:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:29.537 23:49:59 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:29.537 23:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:29.537 23:49:59 -- common/autotest_common.sh@10 -- # set +x 00:05:30.478 23:50:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.478 23:50:00 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:30.478 23:50:00 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:30.478 23:50:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.478 23:50:00 -- common/autotest_common.sh@10 -- # set +x 00:05:31.419 23:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.419 00:05:31.420 real 0m5.099s 00:05:31.420 user 0m0.027s 00:05:31.420 sys 0m0.003s 00:05:31.420 23:50:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.420 23:50:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.420 ************************************ 00:05:31.420 END TEST scheduler_create_thread 00:05:31.420 ************************************ 00:05:31.420 23:50:01 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.420 23:50:01 -- scheduler/scheduler.sh@46 -- # killprocess 188831 00:05:31.420 23:50:01 -- common/autotest_common.sh@936 -- # '[' -z 188831 ']' 00:05:31.420 23:50:01 -- common/autotest_common.sh@940 -- # kill -0 188831 00:05:31.420 23:50:01 -- common/autotest_common.sh@941 -- # uname 00:05:31.420 23:50:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.420 23:50:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 188831 00:05:31.420 23:50:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:31.420 23:50:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:31.420 23:50:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 188831' 00:05:31.420 killing process with pid 188831 00:05:31.420 23:50:01 -- common/autotest_common.sh@955 -- # kill 188831 00:05:31.420 23:50:01 -- common/autotest_common.sh@960 -- # wait 188831 00:05:31.681 [2024-04-26 23:50:01.670714] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
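killprocess, called at the end of this and every other test above, is essentially "check the PID is still alive, signal it, then wait for it to exit"; the traces also show it inspecting the process name and special-casing sudo. A reduced sketch of that shutdown pattern (the real helper in common/autotest_common.sh does more checking than this):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                     # fail early if the PID is already gone
        ps --no-headers -o comm= "$pid"    # the helper looks at this name (e.g. reactor_2)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # block until the process has exited
    }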
00:05:31.681 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:31.681 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:31.681 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:31.681 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:31.681 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:31.681 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:31.681 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:31.681 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:31.681 00:05:31.681 real 0m6.893s 00:05:31.681 user 0m13.418s 00:05:31.681 sys 0m0.432s 00:05:31.681 23:50:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.681 23:50:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.681 ************************************ 00:05:31.681 END TEST event_scheduler 00:05:31.681 ************************************ 00:05:31.943 23:50:01 -- event/event.sh@51 -- # modprobe -n nbd 00:05:31.943 23:50:01 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:31.943 23:50:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.943 23:50:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.943 23:50:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.943 ************************************ 00:05:31.943 START TEST app_repeat 00:05:31.943 ************************************ 00:05:31.943 23:50:02 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:31.943 23:50:02 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.943 23:50:02 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.943 23:50:02 -- event/event.sh@13 -- # local nbd_list 00:05:31.943 23:50:02 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.943 23:50:02 -- event/event.sh@14 -- # local bdev_list 00:05:31.943 23:50:02 -- event/event.sh@15 -- # local repeat_times=4 00:05:31.943 23:50:02 -- event/event.sh@17 -- # modprobe nbd 00:05:31.943 23:50:02 -- event/event.sh@19 -- # repeat_pid=190241 00:05:31.943 23:50:02 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.943 23:50:02 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:31.943 23:50:02 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 190241' 00:05:31.943 Process app_repeat pid: 190241 00:05:31.943 23:50:02 -- event/event.sh@23 -- # for i in {0..2} 00:05:31.943 23:50:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:31.943 spdk_app_start Round 0 00:05:31.943 23:50:02 -- event/event.sh@25 -- # waitforlisten 190241 /var/tmp/spdk-nbd.sock 00:05:31.943 23:50:02 -- common/autotest_common.sh@817 -- # '[' -z 190241 ']' 00:05:31.943 23:50:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.943 23:50:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.943 23:50:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
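app_repeat points its RPC server at /var/tmp/spdk-nbd.sock; the test then creates two malloc bdevs (64 MB, 4 KiB blocks), exports them as /dev/nbd0 and /dev/nbd1, and verifies both with dd, as the traces below show. The RPC side of that setup, condensed into a sketch:

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC bdev_malloc_create 64 4096         # -> Malloc0
    $RPC bdev_malloc_create 64 4096         # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0   # export each bdev as a kernel nbd device
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    $RPC nbd_get_disks                      # lists the nbd_device/bdev_name pairs seen below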
00:05:31.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.943 23:50:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.943 23:50:02 -- common/autotest_common.sh@10 -- # set +x 00:05:31.943 [2024-04-26 23:50:02.117062] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:31.943 [2024-04-26 23:50:02.117127] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid190241 ] 00:05:31.943 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.204 [2024-04-26 23:50:02.179183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.204 [2024-04-26 23:50:02.245295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.204 [2024-04-26 23:50:02.245301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.777 23:50:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.777 23:50:02 -- common/autotest_common.sh@850 -- # return 0 00:05:32.777 23:50:02 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.039 Malloc0 00:05:33.039 23:50:03 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.039 Malloc1 00:05:33.039 23:50:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@12 -- # local i 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.039 23:50:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.301 /dev/nbd0 00:05:33.301 23:50:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.301 23:50:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.301 23:50:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:33.301 23:50:03 -- common/autotest_common.sh@855 -- # local i 00:05:33.301 23:50:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:33.301 23:50:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:33.301 23:50:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:33.301 23:50:03 -- 
common/autotest_common.sh@859 -- # break 00:05:33.301 23:50:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:33.301 23:50:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:33.301 23:50:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.301 1+0 records in 00:05:33.301 1+0 records out 00:05:33.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271718 s, 15.1 MB/s 00:05:33.301 23:50:03 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.301 23:50:03 -- common/autotest_common.sh@872 -- # size=4096 00:05:33.301 23:50:03 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.301 23:50:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:33.301 23:50:03 -- common/autotest_common.sh@875 -- # return 0 00:05:33.301 23:50:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.301 23:50:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.301 23:50:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.562 /dev/nbd1 00:05:33.562 23:50:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.562 23:50:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.562 23:50:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:33.562 23:50:03 -- common/autotest_common.sh@855 -- # local i 00:05:33.562 23:50:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:33.563 23:50:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:33.563 23:50:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:33.563 23:50:03 -- common/autotest_common.sh@859 -- # break 00:05:33.563 23:50:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:33.563 23:50:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:33.563 23:50:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.563 1+0 records in 00:05:33.563 1+0 records out 00:05:33.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251831 s, 16.3 MB/s 00:05:33.563 23:50:03 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.563 23:50:03 -- common/autotest_common.sh@872 -- # size=4096 00:05:33.563 23:50:03 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.563 23:50:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:33.563 23:50:03 -- common/autotest_common.sh@875 -- # return 0 00:05:33.563 23:50:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.563 23:50:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.563 23:50:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.563 23:50:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.563 23:50:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.563 23:50:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.563 { 00:05:33.563 "nbd_device": "/dev/nbd0", 00:05:33.563 "bdev_name": "Malloc0" 00:05:33.563 }, 00:05:33.563 { 00:05:33.563 "nbd_device": "/dev/nbd1", 
00:05:33.563 "bdev_name": "Malloc1" 00:05:33.563 } 00:05:33.563 ]' 00:05:33.563 23:50:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.563 { 00:05:33.563 "nbd_device": "/dev/nbd0", 00:05:33.563 "bdev_name": "Malloc0" 00:05:33.563 }, 00:05:33.563 { 00:05:33.563 "nbd_device": "/dev/nbd1", 00:05:33.563 "bdev_name": "Malloc1" 00:05:33.563 } 00:05:33.563 ]' 00:05:33.563 23:50:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.825 /dev/nbd1' 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.825 /dev/nbd1' 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.825 256+0 records in 00:05:33.825 256+0 records out 00:05:33.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124799 s, 84.0 MB/s 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.825 256+0 records in 00:05:33.825 256+0 records out 00:05:33.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168708 s, 62.2 MB/s 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.825 256+0 records in 00:05:33.825 256+0 records out 00:05:33.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016835 s, 62.3 MB/s 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@51 -- # local i 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.825 23:50:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@41 -- # break 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@41 -- # break 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.086 23:50:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@65 -- # true 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.347 23:50:04 -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.347 23:50:04 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.609 23:50:04 -- event/event.sh@35 -- # 
sleep 3 00:05:34.609 [2024-04-26 23:50:04.715669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.609 [2024-04-26 23:50:04.778315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.609 [2024-04-26 23:50:04.778321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.609 [2024-04-26 23:50:04.810027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.609 [2024-04-26 23:50:04.810061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.910 23:50:07 -- event/event.sh@23 -- # for i in {0..2} 00:05:37.910 23:50:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:37.910 spdk_app_start Round 1 00:05:37.910 23:50:07 -- event/event.sh@25 -- # waitforlisten 190241 /var/tmp/spdk-nbd.sock 00:05:37.910 23:50:07 -- common/autotest_common.sh@817 -- # '[' -z 190241 ']' 00:05:37.910 23:50:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.910 23:50:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.910 23:50:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.910 23:50:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.910 23:50:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.910 23:50:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.910 23:50:07 -- common/autotest_common.sh@850 -- # return 0 00:05:37.910 23:50:07 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.910 Malloc0 00:05:37.910 23:50:07 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.910 Malloc1 00:05:37.910 23:50:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@12 -- # local i 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.910 23:50:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.170 /dev/nbd0 00:05:38.170 23:50:08 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.170 23:50:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.170 23:50:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:38.170 23:50:08 -- common/autotest_common.sh@855 -- # local i 00:05:38.170 23:50:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:38.170 23:50:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:38.170 23:50:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:38.170 23:50:08 -- common/autotest_common.sh@859 -- # break 00:05:38.170 23:50:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:38.170 23:50:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:38.170 23:50:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.170 1+0 records in 00:05:38.170 1+0 records out 00:05:38.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265476 s, 15.4 MB/s 00:05:38.170 23:50:08 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.170 23:50:08 -- common/autotest_common.sh@872 -- # size=4096 00:05:38.170 23:50:08 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.170 23:50:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:38.170 23:50:08 -- common/autotest_common.sh@875 -- # return 0 00:05:38.170 23:50:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.170 23:50:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.170 23:50:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.170 /dev/nbd1 00:05:38.429 23:50:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.429 23:50:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.429 23:50:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:38.429 23:50:08 -- common/autotest_common.sh@855 -- # local i 00:05:38.429 23:50:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:38.429 23:50:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:38.429 23:50:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:38.429 23:50:08 -- common/autotest_common.sh@859 -- # break 00:05:38.429 23:50:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:38.429 23:50:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:38.429 23:50:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.429 1+0 records in 00:05:38.429 1+0 records out 00:05:38.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306603 s, 13.4 MB/s 00:05:38.430 23:50:08 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.430 23:50:08 -- common/autotest_common.sh@872 -- # size=4096 00:05:38.430 23:50:08 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.430 23:50:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:38.430 23:50:08 -- common/autotest_common.sh@875 -- # return 0 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.430 { 00:05:38.430 "nbd_device": "/dev/nbd0", 00:05:38.430 "bdev_name": "Malloc0" 00:05:38.430 }, 00:05:38.430 { 00:05:38.430 "nbd_device": "/dev/nbd1", 00:05:38.430 "bdev_name": "Malloc1" 00:05:38.430 } 00:05:38.430 ]' 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.430 { 00:05:38.430 "nbd_device": "/dev/nbd0", 00:05:38.430 "bdev_name": "Malloc0" 00:05:38.430 }, 00:05:38.430 { 00:05:38.430 "nbd_device": "/dev/nbd1", 00:05:38.430 "bdev_name": "Malloc1" 00:05:38.430 } 00:05:38.430 ]' 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.430 /dev/nbd1' 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.430 /dev/nbd1' 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.430 256+0 records in 00:05:38.430 256+0 records out 00:05:38.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00293995 s, 357 MB/s 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.430 23:50:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.695 256+0 records in 00:05:38.695 256+0 records out 00:05:38.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162447 s, 64.5 MB/s 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.695 256+0 records in 00:05:38.695 256+0 records out 00:05:38.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016284 s, 64.4 MB/s 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@51 -- # local i 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@41 -- # break 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.695 23:50:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@41 -- # break 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.999 23:50:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.280 23:50:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.280 23:50:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.280 23:50:09 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:39.280 23:50:09 -- bdev/nbd_common.sh@65 -- # true 00:05:39.280 23:50:09 -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.280 23:50:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.280 23:50:09 -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.280 23:50:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.280 23:50:09 -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.280 23:50:09 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.280 23:50:09 -- event/event.sh@35 -- # sleep 3 00:05:39.540 [2024-04-26 23:50:09.541464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.540 [2024-04-26 23:50:09.603587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.540 [2024-04-26 23:50:09.603593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.540 [2024-04-26 23:50:09.636150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.540 [2024-04-26 23:50:09.636186] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.847 23:50:12 -- event/event.sh@23 -- # for i in {0..2} 00:05:42.847 23:50:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:42.847 spdk_app_start Round 2 00:05:42.847 23:50:12 -- event/event.sh@25 -- # waitforlisten 190241 /var/tmp/spdk-nbd.sock 00:05:42.847 23:50:12 -- common/autotest_common.sh@817 -- # '[' -z 190241 ']' 00:05:42.847 23:50:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.847 23:50:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:42.847 23:50:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
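Condensed from the trace above, one app_repeat round amounts to the following shell sequence (a sketch, not the verbatim test code; RPC stands in for spdk/scripts/rpc.py and the long workspace paths are shortened):

    SOCK=/var/tmp/spdk-nbd.sock
    $RPC -s $SOCK bdev_malloc_create 64 4096         # creates Malloc0 (64 MB, 4096-byte blocks)
    $RPC -s $SOCK bdev_malloc_create 64 4096         # creates Malloc1
    $RPC -s $SOCK nbd_start_disk Malloc0 /dev/nbd0
    $RPC -s $SOCK nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # 1 MiB reference pattern
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct   # write the pattern to the nbd device
        cmp -b -n 1M nbdrandtest $d                              # read back and verify
    done
    rm nbdrandtest
    $RPC -s $SOCK nbd_stop_disk /dev/nbd0
    $RPC -s $SOCK nbd_stop_disk /dev/nbd1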
00:05:42.847 23:50:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:42.847 23:50:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 23:50:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:42.847 23:50:12 -- common/autotest_common.sh@850 -- # return 0 00:05:42.847 23:50:12 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.847 Malloc0 00:05:42.847 23:50:12 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.847 Malloc1 00:05:42.847 23:50:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@12 -- # local i 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.847 23:50:12 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.847 /dev/nbd0 00:05:42.847 23:50:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.847 23:50:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.847 23:50:13 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:42.847 23:50:13 -- common/autotest_common.sh@855 -- # local i 00:05:42.847 23:50:13 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:42.847 23:50:13 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:42.847 23:50:13 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:42.847 23:50:13 -- common/autotest_common.sh@859 -- # break 00:05:42.847 23:50:13 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:42.847 23:50:13 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:42.847 23:50:13 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.847 1+0 records in 00:05:42.847 1+0 records out 00:05:42.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268731 s, 15.2 MB/s 00:05:42.847 23:50:13 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.109 23:50:13 -- common/autotest_common.sh@872 -- # size=4096 00:05:43.109 23:50:13 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.109 23:50:13 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:43.109 23:50:13 -- common/autotest_common.sh@875 -- # return 0 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.109 /dev/nbd1 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.109 23:50:13 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:43.109 23:50:13 -- common/autotest_common.sh@855 -- # local i 00:05:43.109 23:50:13 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:43.109 23:50:13 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:43.109 23:50:13 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:43.109 23:50:13 -- common/autotest_common.sh@859 -- # break 00:05:43.109 23:50:13 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:43.109 23:50:13 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:43.109 23:50:13 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.109 1+0 records in 00:05:43.109 1+0 records out 00:05:43.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274943 s, 14.9 MB/s 00:05:43.109 23:50:13 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.109 23:50:13 -- common/autotest_common.sh@872 -- # size=4096 00:05:43.109 23:50:13 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.109 23:50:13 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:43.109 23:50:13 -- common/autotest_common.sh@875 -- # return 0 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.109 23:50:13 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.370 23:50:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.370 { 00:05:43.370 "nbd_device": "/dev/nbd0", 00:05:43.370 "bdev_name": "Malloc0" 00:05:43.370 }, 00:05:43.370 { 00:05:43.370 "nbd_device": "/dev/nbd1", 00:05:43.370 "bdev_name": "Malloc1" 00:05:43.370 } 00:05:43.370 ]' 00:05:43.370 23:50:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.370 { 00:05:43.370 "nbd_device": "/dev/nbd0", 00:05:43.370 "bdev_name": "Malloc0" 00:05:43.370 }, 00:05:43.370 { 00:05:43.370 "nbd_device": "/dev/nbd1", 00:05:43.370 "bdev_name": "Malloc1" 00:05:43.370 } 00:05:43.370 ]' 00:05:43.370 23:50:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.370 23:50:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.370 /dev/nbd1' 00:05:43.370 23:50:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.370 23:50:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.370 /dev/nbd1' 00:05:43.370 23:50:13 -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.370 23:50:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.370 23:50:13 -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.371 23:50:13 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.371 256+0 records in 00:05:43.371 256+0 records out 00:05:43.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114112 s, 91.9 MB/s 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.371 256+0 records in 00:05:43.371 256+0 records out 00:05:43.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158707 s, 66.1 MB/s 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.371 256+0 records in 00:05:43.371 256+0 records out 00:05:43.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164292 s, 63.8 MB/s 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@51 -- # local i 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.371 23:50:13 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.632 23:50:13 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.632 23:50:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.632 23:50:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.632 23:50:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.632 23:50:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.632 23:50:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.632 23:50:13 -- bdev/nbd_common.sh@41 -- # break 00:05:43.632 23:50:13 -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.632 23:50:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.632 23:50:13 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@41 -- # break 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.893 23:50:13 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@65 -- # true 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.893 23:50:14 -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.893 23:50:14 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.154 23:50:14 -- event/event.sh@35 -- # sleep 3 00:05:44.416 [2024-04-26 23:50:14.383522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.416 [2024-04-26 23:50:14.446586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.416 [2024-04-26 23:50:14.446591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.416 [2024-04-26 23:50:14.478422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.416 [2024-04-26 23:50:14.478455] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
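The three rounds in this log are driven by a small loop in test/event/event.sh: each iteration re-verifies the nbd devices, then asks the app_repeat instance to shut down over RPC and sleeps before the next round starts. Roughly (a sketch of the loop the xtrace shows, not the script verbatim):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock                    # wait for the RPC socket
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $RPC -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM           # trigger the next round
        sleep 3
    done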
00:05:47.726 23:50:17 -- event/event.sh@38 -- # waitforlisten 190241 /var/tmp/spdk-nbd.sock 00:05:47.726 23:50:17 -- common/autotest_common.sh@817 -- # '[' -z 190241 ']' 00:05:47.726 23:50:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.726 23:50:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.726 23:50:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.726 23:50:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.726 23:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.726 23:50:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.726 23:50:17 -- common/autotest_common.sh@850 -- # return 0 00:05:47.726 23:50:17 -- event/event.sh@39 -- # killprocess 190241 00:05:47.726 23:50:17 -- common/autotest_common.sh@936 -- # '[' -z 190241 ']' 00:05:47.726 23:50:17 -- common/autotest_common.sh@940 -- # kill -0 190241 00:05:47.726 23:50:17 -- common/autotest_common.sh@941 -- # uname 00:05:47.726 23:50:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.726 23:50:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 190241 00:05:47.726 23:50:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.726 23:50:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.726 23:50:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 190241' 00:05:47.726 killing process with pid 190241 00:05:47.726 23:50:17 -- common/autotest_common.sh@955 -- # kill 190241 00:05:47.726 23:50:17 -- common/autotest_common.sh@960 -- # wait 190241 00:05:47.726 spdk_app_start is called in Round 0. 00:05:47.726 Shutdown signal received, stop current app iteration 00:05:47.726 Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 reinitialization... 00:05:47.726 spdk_app_start is called in Round 1. 00:05:47.726 Shutdown signal received, stop current app iteration 00:05:47.726 Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 reinitialization... 00:05:47.726 spdk_app_start is called in Round 2. 00:05:47.726 Shutdown signal received, stop current app iteration 00:05:47.726 Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 reinitialization... 00:05:47.726 spdk_app_start is called in Round 3. 
00:05:47.726 Shutdown signal received, stop current app iteration 00:05:47.726 23:50:17 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:47.726 23:50:17 -- event/event.sh@42 -- # return 0 00:05:47.726 00:05:47.726 real 0m15.493s 00:05:47.726 user 0m33.385s 00:05:47.726 sys 0m2.042s 00:05:47.726 23:50:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.726 23:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.726 ************************************ 00:05:47.726 END TEST app_repeat 00:05:47.726 ************************************ 00:05:47.726 23:50:17 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:47.726 23:50:17 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:47.726 23:50:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.726 23:50:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.726 23:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.726 ************************************ 00:05:47.726 START TEST cpu_locks 00:05:47.726 ************************************ 00:05:47.726 23:50:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:47.726 * Looking for test storage... 00:05:47.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:47.726 23:50:17 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:47.726 23:50:17 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:47.726 23:50:17 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:47.726 23:50:17 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:47.726 23:50:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.726 23:50:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.726 23:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.989 ************************************ 00:05:47.989 START TEST default_locks 00:05:47.989 ************************************ 00:05:47.989 23:50:18 -- common/autotest_common.sh@1111 -- # default_locks 00:05:47.989 23:50:18 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=193760 00:05:47.989 23:50:18 -- event/cpu_locks.sh@47 -- # waitforlisten 193760 00:05:47.989 23:50:18 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.989 23:50:18 -- common/autotest_common.sh@817 -- # '[' -z 193760 ']' 00:05:47.989 23:50:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.989 23:50:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.989 23:50:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.989 23:50:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.989 23:50:18 -- common/autotest_common.sh@10 -- # set +x 00:05:47.989 [2024-04-26 23:50:18.068551] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:05:47.989 [2024-04-26 23:50:18.068603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193760 ] 00:05:47.989 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.989 [2024-04-26 23:50:18.129849] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.989 [2024-04-26 23:50:18.198232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.934 23:50:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.934 23:50:18 -- common/autotest_common.sh@850 -- # return 0 00:05:48.934 23:50:18 -- event/cpu_locks.sh@49 -- # locks_exist 193760 00:05:48.934 23:50:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.934 23:50:18 -- event/cpu_locks.sh@22 -- # lslocks -p 193760 00:05:49.196 lslocks: write error 00:05:49.196 23:50:19 -- event/cpu_locks.sh@50 -- # killprocess 193760 00:05:49.196 23:50:19 -- common/autotest_common.sh@936 -- # '[' -z 193760 ']' 00:05:49.196 23:50:19 -- common/autotest_common.sh@940 -- # kill -0 193760 00:05:49.196 23:50:19 -- common/autotest_common.sh@941 -- # uname 00:05:49.196 23:50:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.196 23:50:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 193760 00:05:49.196 23:50:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.196 23:50:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.196 23:50:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 193760' 00:05:49.196 killing process with pid 193760 00:05:49.196 23:50:19 -- common/autotest_common.sh@955 -- # kill 193760 00:05:49.196 23:50:19 -- common/autotest_common.sh@960 -- # wait 193760 00:05:49.459 23:50:19 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 193760 00:05:49.459 23:50:19 -- common/autotest_common.sh@638 -- # local es=0 00:05:49.459 23:50:19 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 193760 00:05:49.459 23:50:19 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:49.459 23:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:49.459 23:50:19 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:49.459 23:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:49.459 23:50:19 -- common/autotest_common.sh@641 -- # waitforlisten 193760 00:05:49.459 23:50:19 -- common/autotest_common.sh@817 -- # '[' -z 193760 ']' 00:05:49.459 23:50:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.459 23:50:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.459 23:50:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
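The default_locks assertion is visible in the lslocks calls above: with default settings spdk_tgt takes an spdk_cpu_lock file lock for each core in its mask, and the test simply greps lslocks output for it (the "lslocks: write error" lines are most likely lslocks hitting a closed pipe once grep -q has matched and exited, not a failure). A minimal stand-alone version of the check, assuming a target already running with -m 0x1 and its pid in $pid:

    pid=193760    # pid from this particular run; substitute your own target's pid
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "cpumask lock for core 0 is held, as expected"
    fi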
00:05:49.459 23:50:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.459 23:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:49.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (193760) - No such process 00:05:49.459 ERROR: process (pid: 193760) is no longer running 00:05:49.459 23:50:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:49.459 23:50:19 -- common/autotest_common.sh@850 -- # return 1 00:05:49.459 23:50:19 -- common/autotest_common.sh@641 -- # es=1 00:05:49.459 23:50:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:49.459 23:50:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:49.459 23:50:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:49.459 23:50:19 -- event/cpu_locks.sh@54 -- # no_locks 00:05:49.459 23:50:19 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.459 23:50:19 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.459 23:50:19 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.459 00:05:49.459 real 0m1.535s 00:05:49.459 user 0m1.609s 00:05:49.459 sys 0m0.540s 00:05:49.459 23:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.459 23:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:49.459 ************************************ 00:05:49.459 END TEST default_locks 00:05:49.459 ************************************ 00:05:49.459 23:50:19 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:49.459 23:50:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.459 23:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.459 23:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:49.720 ************************************ 00:05:49.720 START TEST default_locks_via_rpc 00:05:49.720 ************************************ 00:05:49.720 23:50:19 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:49.720 23:50:19 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=194150 00:05:49.720 23:50:19 -- event/cpu_locks.sh@63 -- # waitforlisten 194150 00:05:49.720 23:50:19 -- common/autotest_common.sh@817 -- # '[' -z 194150 ']' 00:05:49.720 23:50:19 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.720 23:50:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.720 23:50:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.720 23:50:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.720 23:50:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.720 23:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:49.720 [2024-04-26 23:50:19.780884] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:05:49.720 [2024-04-26 23:50:19.780943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid194150 ] 00:05:49.720 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.720 [2024-04-26 23:50:19.849090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.720 [2024-04-26 23:50:19.923766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.662 23:50:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.662 23:50:20 -- common/autotest_common.sh@850 -- # return 0 00:05:50.662 23:50:20 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:50.662 23:50:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.662 23:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.662 23:50:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.662 23:50:20 -- event/cpu_locks.sh@67 -- # no_locks 00:05:50.662 23:50:20 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.662 23:50:20 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.662 23:50:20 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.662 23:50:20 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.662 23:50:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.662 23:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.662 23:50:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.662 23:50:20 -- event/cpu_locks.sh@71 -- # locks_exist 194150 00:05:50.662 23:50:20 -- event/cpu_locks.sh@22 -- # lslocks -p 194150 00:05:50.662 23:50:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.924 23:50:20 -- event/cpu_locks.sh@73 -- # killprocess 194150 00:05:50.924 23:50:20 -- common/autotest_common.sh@936 -- # '[' -z 194150 ']' 00:05:50.924 23:50:20 -- common/autotest_common.sh@940 -- # kill -0 194150 00:05:50.924 23:50:20 -- common/autotest_common.sh@941 -- # uname 00:05:50.924 23:50:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.924 23:50:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 194150 00:05:50.924 23:50:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.924 23:50:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.924 23:50:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 194150' 00:05:50.924 killing process with pid 194150 00:05:50.924 23:50:20 -- common/autotest_common.sh@955 -- # kill 194150 00:05:50.924 23:50:20 -- common/autotest_common.sh@960 -- # wait 194150 00:05:51.186 00:05:51.186 real 0m1.436s 00:05:51.186 user 0m1.532s 00:05:51.186 sys 0m0.472s 00:05:51.186 23:50:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.186 23:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.186 ************************************ 00:05:51.186 END TEST default_locks_via_rpc 00:05:51.186 ************************************ 00:05:51.186 23:50:21 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:51.186 23:50:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.186 23:50:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.186 23:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.186 ************************************ 00:05:51.186 START TEST non_locking_app_on_locked_coremask 00:05:51.186 
************************************ 00:05:51.186 23:50:21 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:51.186 23:50:21 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=194513 00:05:51.186 23:50:21 -- event/cpu_locks.sh@81 -- # waitforlisten 194513 /var/tmp/spdk.sock 00:05:51.186 23:50:21 -- common/autotest_common.sh@817 -- # '[' -z 194513 ']' 00:05:51.186 23:50:21 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.186 23:50:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.186 23:50:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.186 23:50:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.186 23:50:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.186 23:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.186 [2024-04-26 23:50:21.402395] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:51.186 [2024-04-26 23:50:21.402444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid194513 ] 00:05:51.447 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.447 [2024-04-26 23:50:21.463929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.447 [2024-04-26 23:50:21.533843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.020 23:50:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.020 23:50:22 -- common/autotest_common.sh@850 -- # return 0 00:05:52.020 23:50:22 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=194594 00:05:52.020 23:50:22 -- event/cpu_locks.sh@85 -- # waitforlisten 194594 /var/tmp/spdk2.sock 00:05:52.020 23:50:22 -- common/autotest_common.sh@817 -- # '[' -z 194594 ']' 00:05:52.020 23:50:22 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.020 23:50:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.020 23:50:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.020 23:50:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.020 23:50:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.020 23:50:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.020 [2024-04-26 23:50:22.233318] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:52.020 [2024-04-26 23:50:22.233369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid194594 ] 00:05:52.282 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.282 [2024-04-26 23:50:22.321212] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
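non_locking_app_on_locked_coremask starts a second target on the same core mask; as the startup notices above show, that only succeeds because the second instance is given --disable-cpumask-locks and its own RPC socket. A rough sketch of the arrangement (binary path abbreviated; the real test waits for each RPC socket rather than just backgrounding the processes):

    spdk_tgt -m 0x1 &                                                   # first instance takes the core-0 lock
    pid1=$!
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # second instance skips lock acquisition
    pid2=$!
    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "first instance still holds the lock"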
00:05:52.282 [2024-04-26 23:50:22.321237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.282 [2024-04-26 23:50:22.450630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.854 23:50:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.854 23:50:22 -- common/autotest_common.sh@850 -- # return 0 00:05:52.854 23:50:22 -- event/cpu_locks.sh@87 -- # locks_exist 194513 00:05:52.854 23:50:22 -- event/cpu_locks.sh@22 -- # lslocks -p 194513 00:05:52.854 23:50:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.424 lslocks: write error 00:05:53.424 23:50:23 -- event/cpu_locks.sh@89 -- # killprocess 194513 00:05:53.424 23:50:23 -- common/autotest_common.sh@936 -- # '[' -z 194513 ']' 00:05:53.424 23:50:23 -- common/autotest_common.sh@940 -- # kill -0 194513 00:05:53.424 23:50:23 -- common/autotest_common.sh@941 -- # uname 00:05:53.424 23:50:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.424 23:50:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 194513 00:05:53.424 23:50:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.424 23:50:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.424 23:50:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 194513' 00:05:53.424 killing process with pid 194513 00:05:53.424 23:50:23 -- common/autotest_common.sh@955 -- # kill 194513 00:05:53.424 23:50:23 -- common/autotest_common.sh@960 -- # wait 194513 00:05:53.995 23:50:23 -- event/cpu_locks.sh@90 -- # killprocess 194594 00:05:53.995 23:50:23 -- common/autotest_common.sh@936 -- # '[' -z 194594 ']' 00:05:53.995 23:50:23 -- common/autotest_common.sh@940 -- # kill -0 194594 00:05:53.995 23:50:23 -- common/autotest_common.sh@941 -- # uname 00:05:53.995 23:50:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.995 23:50:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 194594 00:05:53.995 23:50:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.995 23:50:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.995 23:50:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 194594' 00:05:53.995 killing process with pid 194594 00:05:53.995 23:50:23 -- common/autotest_common.sh@955 -- # kill 194594 00:05:53.995 23:50:23 -- common/autotest_common.sh@960 -- # wait 194594 00:05:53.995 00:05:53.995 real 0m2.831s 00:05:53.995 user 0m3.078s 00:05:53.995 sys 0m0.844s 00:05:53.995 23:50:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.995 23:50:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.995 ************************************ 00:05:53.995 END TEST non_locking_app_on_locked_coremask 00:05:53.995 ************************************ 00:05:54.253 23:50:24 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:54.253 23:50:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.253 23:50:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.253 23:50:24 -- common/autotest_common.sh@10 -- # set +x 00:05:54.253 ************************************ 00:05:54.253 START TEST locking_app_on_unlocked_coremask 00:05:54.253 ************************************ 00:05:54.253 23:50:24 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:54.253 23:50:24 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=195070 00:05:54.253 23:50:24 -- event/cpu_locks.sh@99 -- # 
waitforlisten 195070 /var/tmp/spdk.sock 00:05:54.253 23:50:24 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:54.253 23:50:24 -- common/autotest_common.sh@817 -- # '[' -z 195070 ']' 00:05:54.253 23:50:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.254 23:50:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.254 23:50:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.254 23:50:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.254 23:50:24 -- common/autotest_common.sh@10 -- # set +x 00:05:54.254 [2024-04-26 23:50:24.410255] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:54.254 [2024-04-26 23:50:24.410304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid195070 ] 00:05:54.254 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.254 [2024-04-26 23:50:24.469650] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:54.254 [2024-04-26 23:50:24.469677] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.514 [2024-04-26 23:50:24.533955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.084 23:50:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.084 23:50:25 -- common/autotest_common.sh@850 -- # return 0 00:05:55.084 23:50:25 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=195307 00:05:55.084 23:50:25 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.084 23:50:25 -- event/cpu_locks.sh@103 -- # waitforlisten 195307 /var/tmp/spdk2.sock 00:05:55.084 23:50:25 -- common/autotest_common.sh@817 -- # '[' -z 195307 ']' 00:05:55.084 23:50:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.084 23:50:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.084 23:50:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.084 23:50:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.084 23:50:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.084 [2024-04-26 23:50:25.202491] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:05:55.084 [2024-04-26 23:50:25.202540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid195307 ] 00:05:55.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.084 [2024-04-26 23:50:25.292511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.344 [2024-04-26 23:50:25.425408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.914 23:50:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.914 23:50:25 -- common/autotest_common.sh@850 -- # return 0 00:05:55.914 23:50:25 -- event/cpu_locks.sh@105 -- # locks_exist 195307 00:05:55.914 23:50:25 -- event/cpu_locks.sh@22 -- # lslocks -p 195307 00:05:55.914 23:50:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.483 lslocks: write error 00:05:56.483 23:50:26 -- event/cpu_locks.sh@107 -- # killprocess 195070 00:05:56.483 23:50:26 -- common/autotest_common.sh@936 -- # '[' -z 195070 ']' 00:05:56.483 23:50:26 -- common/autotest_common.sh@940 -- # kill -0 195070 00:05:56.483 23:50:26 -- common/autotest_common.sh@941 -- # uname 00:05:56.483 23:50:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.483 23:50:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 195070 00:05:56.483 23:50:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.483 23:50:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.483 23:50:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 195070' 00:05:56.483 killing process with pid 195070 00:05:56.483 23:50:26 -- common/autotest_common.sh@955 -- # kill 195070 00:05:56.483 23:50:26 -- common/autotest_common.sh@960 -- # wait 195070 00:05:56.744 23:50:26 -- event/cpu_locks.sh@108 -- # killprocess 195307 00:05:56.744 23:50:26 -- common/autotest_common.sh@936 -- # '[' -z 195307 ']' 00:05:56.744 23:50:26 -- common/autotest_common.sh@940 -- # kill -0 195307 00:05:56.744 23:50:26 -- common/autotest_common.sh@941 -- # uname 00:05:56.744 23:50:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.744 23:50:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 195307 00:05:56.745 23:50:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.745 23:50:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.745 23:50:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 195307' 00:05:56.745 killing process with pid 195307 00:05:56.745 23:50:26 -- common/autotest_common.sh@955 -- # kill 195307 00:05:56.745 23:50:26 -- common/autotest_common.sh@960 -- # wait 195307 00:05:57.005 00:05:57.005 real 0m2.802s 00:05:57.005 user 0m3.041s 00:05:57.005 sys 0m0.818s 00:05:57.005 23:50:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.005 23:50:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.005 ************************************ 00:05:57.005 END TEST locking_app_on_unlocked_coremask 00:05:57.005 ************************************ 00:05:57.005 23:50:27 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:57.005 23:50:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.005 23:50:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.005 23:50:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.266 
************************************ 00:05:57.266 START TEST locking_app_on_locked_coremask 00:05:57.266 ************************************ 00:05:57.266 23:50:27 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:57.266 23:50:27 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=195687 00:05:57.266 23:50:27 -- event/cpu_locks.sh@116 -- # waitforlisten 195687 /var/tmp/spdk.sock 00:05:57.266 23:50:27 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.266 23:50:27 -- common/autotest_common.sh@817 -- # '[' -z 195687 ']' 00:05:57.266 23:50:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.266 23:50:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.266 23:50:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.266 23:50:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.266 23:50:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.266 [2024-04-26 23:50:27.389923] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:57.266 [2024-04-26 23:50:27.389967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid195687 ] 00:05:57.266 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.266 [2024-04-26 23:50:27.449205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.527 [2024-04-26 23:50:27.512097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.097 23:50:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.097 23:50:28 -- common/autotest_common.sh@850 -- # return 0 00:05:58.097 23:50:28 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=196021 00:05:58.098 23:50:28 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 196021 /var/tmp/spdk2.sock 00:05:58.098 23:50:28 -- common/autotest_common.sh@638 -- # local es=0 00:05:58.098 23:50:28 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.098 23:50:28 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 196021 /var/tmp/spdk2.sock 00:05:58.098 23:50:28 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:58.098 23:50:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:58.098 23:50:28 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:58.098 23:50:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:58.098 23:50:28 -- common/autotest_common.sh@641 -- # waitforlisten 196021 /var/tmp/spdk2.sock 00:05:58.098 23:50:28 -- common/autotest_common.sh@817 -- # '[' -z 196021 ']' 00:05:58.098 23:50:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.098 23:50:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.098 23:50:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:58.098 23:50:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.098 23:50:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.098 [2024-04-26 23:50:28.199287] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:58.098 [2024-04-26 23:50:28.199340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196021 ] 00:05:58.098 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.098 [2024-04-26 23:50:28.288377] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 195687 has claimed it. 00:05:58.098 [2024-04-26 23:50:28.288418] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (196021) - No such process 00:05:58.667 ERROR: process (pid: 196021) is no longer running 00:05:58.667 23:50:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.667 23:50:28 -- common/autotest_common.sh@850 -- # return 1 00:05:58.667 23:50:28 -- common/autotest_common.sh@641 -- # es=1 00:05:58.667 23:50:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:58.667 23:50:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:58.667 23:50:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:58.667 23:50:28 -- event/cpu_locks.sh@122 -- # locks_exist 195687 00:05:58.667 23:50:28 -- event/cpu_locks.sh@22 -- # lslocks -p 195687 00:05:58.667 23:50:28 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.238 lslocks: write error 00:05:59.238 23:50:29 -- event/cpu_locks.sh@124 -- # killprocess 195687 00:05:59.238 23:50:29 -- common/autotest_common.sh@936 -- # '[' -z 195687 ']' 00:05:59.238 23:50:29 -- common/autotest_common.sh@940 -- # kill -0 195687 00:05:59.238 23:50:29 -- common/autotest_common.sh@941 -- # uname 00:05:59.238 23:50:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.238 23:50:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 195687 00:05:59.238 23:50:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.238 23:50:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.238 23:50:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 195687' 00:05:59.238 killing process with pid 195687 00:05:59.238 23:50:29 -- common/autotest_common.sh@955 -- # kill 195687 00:05:59.238 23:50:29 -- common/autotest_common.sh@960 -- # wait 195687 00:05:59.499 00:05:59.499 real 0m2.123s 00:05:59.499 user 0m2.335s 00:05:59.499 sys 0m0.600s 00:05:59.499 23:50:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.499 23:50:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.499 ************************************ 00:05:59.499 END TEST locking_app_on_locked_coremask 00:05:59.499 ************************************ 00:05:59.499 23:50:29 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:59.499 23:50:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.499 23:50:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.499 23:50:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.499 ************************************ 00:05:59.499 START TEST locking_overlapped_coremask 00:05:59.499 
************************************ 00:05:59.499 23:50:29 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:59.499 23:50:29 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=196325 00:05:59.499 23:50:29 -- event/cpu_locks.sh@133 -- # waitforlisten 196325 /var/tmp/spdk.sock 00:05:59.499 23:50:29 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:59.499 23:50:29 -- common/autotest_common.sh@817 -- # '[' -z 196325 ']' 00:05:59.499 23:50:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.499 23:50:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.499 23:50:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.499 23:50:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.499 23:50:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.499 [2024-04-26 23:50:29.710809] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:05:59.499 [2024-04-26 23:50:29.710877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196325 ] 00:05:59.760 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.760 [2024-04-26 23:50:29.777032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.760 [2024-04-26 23:50:29.852387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.760 [2024-04-26 23:50:29.852512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.760 [2024-04-26 23:50:29.852515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.331 23:50:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.331 23:50:30 -- common/autotest_common.sh@850 -- # return 0 00:06:00.331 23:50:30 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=196402 00:06:00.331 23:50:30 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 196402 /var/tmp/spdk2.sock 00:06:00.331 23:50:30 -- common/autotest_common.sh@638 -- # local es=0 00:06:00.331 23:50:30 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:00.331 23:50:30 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 196402 /var/tmp/spdk2.sock 00:06:00.331 23:50:30 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:00.331 23:50:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.331 23:50:30 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:00.331 23:50:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.331 23:50:30 -- common/autotest_common.sh@641 -- # waitforlisten 196402 /var/tmp/spdk2.sock 00:06:00.331 23:50:30 -- common/autotest_common.sh@817 -- # '[' -z 196402 ']' 00:06:00.331 23:50:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.331 23:50:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.331 23:50:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:00.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.331 23:50:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.331 23:50:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.331 [2024-04-26 23:50:30.537985] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:00.331 [2024-04-26 23:50:30.538033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196402 ] 00:06:00.592 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.592 [2024-04-26 23:50:30.611820] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 196325 has claimed it. 00:06:00.592 [2024-04-26 23:50:30.611854] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (196402) - No such process 00:06:01.163 ERROR: process (pid: 196402) is no longer running 00:06:01.163 23:50:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:01.163 23:50:31 -- common/autotest_common.sh@850 -- # return 1 00:06:01.163 23:50:31 -- common/autotest_common.sh@641 -- # es=1 00:06:01.163 23:50:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:01.163 23:50:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:01.163 23:50:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:01.163 23:50:31 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.163 23:50:31 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.163 23:50:31 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.163 23:50:31 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.163 23:50:31 -- event/cpu_locks.sh@141 -- # killprocess 196325 00:06:01.163 23:50:31 -- common/autotest_common.sh@936 -- # '[' -z 196325 ']' 00:06:01.163 23:50:31 -- common/autotest_common.sh@940 -- # kill -0 196325 00:06:01.163 23:50:31 -- common/autotest_common.sh@941 -- # uname 00:06:01.163 23:50:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.163 23:50:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 196325 00:06:01.163 23:50:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.163 23:50:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.163 23:50:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 196325' 00:06:01.163 killing process with pid 196325 00:06:01.163 23:50:31 -- common/autotest_common.sh@955 -- # kill 196325 00:06:01.163 23:50:31 -- common/autotest_common.sh@960 -- # wait 196325 00:06:01.423 00:06:01.423 real 0m1.755s 00:06:01.423 user 0m4.940s 00:06:01.423 sys 0m0.382s 00:06:01.423 23:50:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.423 23:50:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.423 ************************************ 00:06:01.423 END TEST locking_overlapped_coremask 00:06:01.423 ************************************ 00:06:01.423 23:50:31 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:01.423 23:50:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.423 23:50:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.423 23:50:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.423 ************************************ 00:06:01.423 START TEST locking_overlapped_coremask_via_rpc 00:06:01.423 ************************************ 00:06:01.423 23:50:31 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:01.423 23:50:31 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=196769 00:06:01.423 23:50:31 -- event/cpu_locks.sh@149 -- # waitforlisten 196769 /var/tmp/spdk.sock 00:06:01.423 23:50:31 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:01.423 23:50:31 -- common/autotest_common.sh@817 -- # '[' -z 196769 ']' 00:06:01.423 23:50:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.423 23:50:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:01.423 23:50:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.423 23:50:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:01.423 23:50:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.684 [2024-04-26 23:50:31.651004] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:01.684 [2024-04-26 23:50:31.651048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196769 ] 00:06:01.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.684 [2024-04-26 23:50:31.710378] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:01.684 [2024-04-26 23:50:31.710404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.684 [2024-04-26 23:50:31.776372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.684 [2024-04-26 23:50:31.776518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.684 [2024-04-26 23:50:31.776521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.256 23:50:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.256 23:50:32 -- common/autotest_common.sh@850 -- # return 0 00:06:02.256 23:50:32 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=196790 00:06:02.256 23:50:32 -- event/cpu_locks.sh@153 -- # waitforlisten 196790 /var/tmp/spdk2.sock 00:06:02.256 23:50:32 -- common/autotest_common.sh@817 -- # '[' -z 196790 ']' 00:06:02.256 23:50:32 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:02.256 23:50:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.256 23:50:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.256 23:50:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:02.256 23:50:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.256 23:50:32 -- common/autotest_common.sh@10 -- # set +x 00:06:02.256 [2024-04-26 23:50:32.473380] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:02.256 [2024-04-26 23:50:32.473428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196790 ] 00:06:02.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.517 [2024-04-26 23:50:32.545681] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.517 [2024-04-26 23:50:32.545703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.517 [2024-04-26 23:50:32.650804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.517 [2024-04-26 23:50:32.653961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.517 [2024-04-26 23:50:32.653963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.089 23:50:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.090 23:50:33 -- common/autotest_common.sh@850 -- # return 0 00:06:03.090 23:50:33 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.090 23:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:03.090 23:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.090 23:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:03.090 23:50:33 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.090 23:50:33 -- common/autotest_common.sh@638 -- # local es=0 00:06:03.090 23:50:33 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.090 23:50:33 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:03.090 23:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.090 23:50:33 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:03.090 23:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:03.090 23:50:33 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.090 23:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:03.090 23:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.090 [2024-04-26 23:50:33.249896] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 196769 has claimed it. 
00:06:03.090 request: 00:06:03.090 { 00:06:03.090 "method": "framework_enable_cpumask_locks", 00:06:03.090 "req_id": 1 00:06:03.090 } 00:06:03.090 Got JSON-RPC error response 00:06:03.090 response: 00:06:03.090 { 00:06:03.090 "code": -32603, 00:06:03.090 "message": "Failed to claim CPU core: 2" 00:06:03.090 } 00:06:03.090 23:50:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:03.090 23:50:33 -- common/autotest_common.sh@641 -- # es=1 00:06:03.090 23:50:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:03.090 23:50:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:03.090 23:50:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:03.090 23:50:33 -- event/cpu_locks.sh@158 -- # waitforlisten 196769 /var/tmp/spdk.sock 00:06:03.090 23:50:33 -- common/autotest_common.sh@817 -- # '[' -z 196769 ']' 00:06:03.090 23:50:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.090 23:50:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:03.090 23:50:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.090 23:50:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:03.090 23:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.351 23:50:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.351 23:50:33 -- common/autotest_common.sh@850 -- # return 0 00:06:03.351 23:50:33 -- event/cpu_locks.sh@159 -- # waitforlisten 196790 /var/tmp/spdk2.sock 00:06:03.351 23:50:33 -- common/autotest_common.sh@817 -- # '[' -z 196790 ']' 00:06:03.351 23:50:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.351 23:50:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:03.351 23:50:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:03.351 23:50:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:03.351 23:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.612 23:50:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.612 23:50:33 -- common/autotest_common.sh@850 -- # return 0 00:06:03.612 23:50:33 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:03.612 23:50:33 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.612 23:50:33 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.612 23:50:33 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.612 00:06:03.612 real 0m1.993s 00:06:03.612 user 0m0.755s 00:06:03.612 sys 0m0.170s 00:06:03.612 23:50:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.612 23:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.612 ************************************ 00:06:03.612 END TEST locking_overlapped_coremask_via_rpc 00:06:03.612 ************************************ 00:06:03.612 23:50:33 -- event/cpu_locks.sh@174 -- # cleanup 00:06:03.612 23:50:33 -- event/cpu_locks.sh@15 -- # [[ -z 196769 ]] 00:06:03.612 23:50:33 -- event/cpu_locks.sh@15 -- # killprocess 196769 00:06:03.612 23:50:33 -- common/autotest_common.sh@936 -- # '[' -z 196769 ']' 00:06:03.612 23:50:33 -- common/autotest_common.sh@940 -- # kill -0 196769 00:06:03.612 23:50:33 -- common/autotest_common.sh@941 -- # uname 00:06:03.612 23:50:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.612 23:50:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 196769 00:06:03.612 23:50:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:03.612 23:50:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:03.613 23:50:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 196769' 00:06:03.613 killing process with pid 196769 00:06:03.613 23:50:33 -- common/autotest_common.sh@955 -- # kill 196769 00:06:03.613 23:50:33 -- common/autotest_common.sh@960 -- # wait 196769 00:06:03.874 23:50:33 -- event/cpu_locks.sh@16 -- # [[ -z 196790 ]] 00:06:03.874 23:50:33 -- event/cpu_locks.sh@16 -- # killprocess 196790 00:06:03.874 23:50:33 -- common/autotest_common.sh@936 -- # '[' -z 196790 ']' 00:06:03.874 23:50:33 -- common/autotest_common.sh@940 -- # kill -0 196790 00:06:03.874 23:50:33 -- common/autotest_common.sh@941 -- # uname 00:06:03.874 23:50:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.874 23:50:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 196790 00:06:03.874 23:50:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:03.874 23:50:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:03.874 23:50:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 196790' 00:06:03.874 killing process with pid 196790 00:06:03.874 23:50:33 -- common/autotest_common.sh@955 -- # kill 196790 00:06:03.874 23:50:33 -- common/autotest_common.sh@960 -- # wait 196790 00:06:04.135 23:50:34 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.135 23:50:34 -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.135 23:50:34 -- event/cpu_locks.sh@15 -- # [[ -z 196769 ]] 00:06:04.135 23:50:34 -- event/cpu_locks.sh@15 -- # killprocess 196769 00:06:04.135 
23:50:34 -- common/autotest_common.sh@936 -- # '[' -z 196769 ']' 00:06:04.135 23:50:34 -- common/autotest_common.sh@940 -- # kill -0 196769 00:06:04.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (196769) - No such process 00:06:04.135 23:50:34 -- common/autotest_common.sh@963 -- # echo 'Process with pid 196769 is not found' 00:06:04.135 Process with pid 196769 is not found 00:06:04.135 23:50:34 -- event/cpu_locks.sh@16 -- # [[ -z 196790 ]] 00:06:04.135 23:50:34 -- event/cpu_locks.sh@16 -- # killprocess 196790 00:06:04.135 23:50:34 -- common/autotest_common.sh@936 -- # '[' -z 196790 ']' 00:06:04.135 23:50:34 -- common/autotest_common.sh@940 -- # kill -0 196790 00:06:04.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (196790) - No such process 00:06:04.135 23:50:34 -- common/autotest_common.sh@963 -- # echo 'Process with pid 196790 is not found' 00:06:04.135 Process with pid 196790 is not found 00:06:04.135 23:50:34 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.135 00:06:04.135 real 0m16.389s 00:06:04.135 user 0m27.122s 00:06:04.135 sys 0m5.042s 00:06:04.135 23:50:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.135 23:50:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.135 ************************************ 00:06:04.135 END TEST cpu_locks 00:06:04.135 ************************************ 00:06:04.135 00:06:04.135 real 0m43.708s 00:06:04.135 user 1m20.818s 00:06:04.135 sys 0m8.456s 00:06:04.135 23:50:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.135 23:50:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.135 ************************************ 00:06:04.135 END TEST event 00:06:04.135 ************************************ 00:06:04.135 23:50:34 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:04.135 23:50:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.135 23:50:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.135 23:50:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.397 ************************************ 00:06:04.397 START TEST thread 00:06:04.397 ************************************ 00:06:04.397 23:50:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:04.397 * Looking for test storage... 00:06:04.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:04.397 23:50:34 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.397 23:50:34 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:04.397 23:50:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.397 23:50:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.657 ************************************ 00:06:04.657 START TEST thread_poller_perf 00:06:04.657 ************************************ 00:06:04.657 23:50:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.657 [2024-04-26 23:50:34.669608] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:06:04.657 [2024-04-26 23:50:34.669701] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197449 ] 00:06:04.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.657 [2024-04-26 23:50:34.739682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.657 [2024-04-26 23:50:34.814671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.657 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:06.041 ====================================== 00:06:06.041 busy:2406894782 (cyc) 00:06:06.041 total_run_count: 286000 00:06:06.041 tsc_hz: 2400000000 (cyc) 00:06:06.041 ====================================== 00:06:06.041 poller_cost: 8415 (cyc), 3506 (nsec) 00:06:06.041 00:06:06.041 real 0m1.228s 00:06:06.041 user 0m1.149s 00:06:06.041 sys 0m0.075s 00:06:06.041 23:50:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.041 23:50:35 -- common/autotest_common.sh@10 -- # set +x 00:06:06.041 ************************************ 00:06:06.041 END TEST thread_poller_perf 00:06:06.041 ************************************ 00:06:06.041 23:50:35 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.041 23:50:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:06.041 23:50:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.041 23:50:35 -- common/autotest_common.sh@10 -- # set +x 00:06:06.041 ************************************ 00:06:06.041 START TEST thread_poller_perf 00:06:06.041 ************************************ 00:06:06.041 23:50:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.041 [2024-04-26 23:50:36.090731] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:06.041 [2024-04-26 23:50:36.090829] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197678 ] 00:06:06.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.041 [2024-04-26 23:50:36.158281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.041 [2024-04-26 23:50:36.233371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.041 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:07.484 ====================================== 00:06:07.484 busy:2402077400 (cyc) 00:06:07.484 total_run_count: 3768000 00:06:07.484 tsc_hz: 2400000000 (cyc) 00:06:07.484 ====================================== 00:06:07.484 poller_cost: 637 (cyc), 265 (nsec) 00:06:07.484 00:06:07.484 real 0m1.220s 00:06:07.484 user 0m1.145s 00:06:07.484 sys 0m0.071s 00:06:07.484 23:50:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.484 23:50:37 -- common/autotest_common.sh@10 -- # set +x 00:06:07.484 ************************************ 00:06:07.484 END TEST thread_poller_perf 00:06:07.484 ************************************ 00:06:07.484 23:50:37 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.484 00:06:07.484 real 0m2.941s 00:06:07.484 user 0m2.473s 00:06:07.484 sys 0m0.436s 00:06:07.484 23:50:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.484 23:50:37 -- common/autotest_common.sh@10 -- # set +x 00:06:07.484 ************************************ 00:06:07.484 END TEST thread 00:06:07.484 ************************************ 00:06:07.484 23:50:37 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:07.484 23:50:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.484 23:50:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.484 23:50:37 -- common/autotest_common.sh@10 -- # set +x 00:06:07.484 ************************************ 00:06:07.484 START TEST accel 00:06:07.484 ************************************ 00:06:07.484 23:50:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:07.484 * Looking for test storage... 00:06:07.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:07.484 23:50:37 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:07.484 23:50:37 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:07.484 23:50:37 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.484 23:50:37 -- accel/accel.sh@62 -- # spdk_tgt_pid=198018 00:06:07.484 23:50:37 -- accel/accel.sh@63 -- # waitforlisten 198018 00:06:07.484 23:50:37 -- common/autotest_common.sh@817 -- # '[' -z 198018 ']' 00:06:07.484 23:50:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.484 23:50:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:07.484 23:50:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.484 23:50:37 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:07.484 23:50:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:07.484 23:50:37 -- accel/accel.sh@61 -- # build_accel_config 00:06:07.484 23:50:37 -- common/autotest_common.sh@10 -- # set +x 00:06:07.484 23:50:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.484 23:50:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.484 23:50:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.484 23:50:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.484 23:50:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.484 23:50:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.484 23:50:37 -- accel/accel.sh@41 -- # jq -r . 
00:06:07.484 [2024-04-26 23:50:37.666314] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:07.484 [2024-04-26 23:50:37.666376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198018 ] 00:06:07.484 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.745 [2024-04-26 23:50:37.733402] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.745 [2024-04-26 23:50:37.809064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.319 23:50:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.319 23:50:38 -- common/autotest_common.sh@850 -- # return 0 00:06:08.319 23:50:38 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:08.319 23:50:38 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:08.319 23:50:38 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:08.319 23:50:38 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:08.319 23:50:38 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:08.319 23:50:38 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:08.319 23:50:38 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:08.319 23:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.319 23:50:38 -- common/autotest_common.sh@10 -- # set +x 00:06:08.319 23:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.319 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.319 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.319 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.319 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.319 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.319 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.319 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.319 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.319 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.319 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.319 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.319 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.319 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.319 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.320 23:50:38 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:08.320 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.320 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.320 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.320 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.320 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.320 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.320 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.320 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.320 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.320 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.320 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.320 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.320 23:50:38 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # IFS== 00:06:08.320 23:50:38 -- accel/accel.sh@72 -- # read -r opc module 00:06:08.320 23:50:38 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.320 23:50:38 -- accel/accel.sh@75 -- # killprocess 198018 00:06:08.320 23:50:38 -- common/autotest_common.sh@936 -- # '[' -z 198018 ']' 00:06:08.320 23:50:38 -- common/autotest_common.sh@940 -- # kill -0 198018 00:06:08.320 23:50:38 -- common/autotest_common.sh@941 -- # uname 00:06:08.581 23:50:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.581 23:50:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 198018 00:06:08.581 23:50:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.581 23:50:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.581 23:50:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 198018' 00:06:08.581 killing process with pid 198018 00:06:08.581 23:50:38 -- common/autotest_common.sh@955 -- # kill 198018 00:06:08.581 23:50:38 -- common/autotest_common.sh@960 -- # wait 198018 00:06:08.581 23:50:38 -- accel/accel.sh@76 -- # trap - ERR 00:06:08.581 23:50:38 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:08.581 23:50:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:08.581 23:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.581 23:50:38 -- common/autotest_common.sh@10 -- # set +x 00:06:08.842 23:50:38 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:08.842 23:50:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:08.842 23:50:38 -- accel/accel.sh@12 -- # build_accel_config 
00:06:08.842 23:50:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.842 23:50:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.842 23:50:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.842 23:50:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.842 23:50:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.842 23:50:38 -- accel/accel.sh@40 -- # local IFS=, 00:06:08.842 23:50:38 -- accel/accel.sh@41 -- # jq -r . 00:06:08.842 23:50:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.842 23:50:38 -- common/autotest_common.sh@10 -- # set +x 00:06:08.842 23:50:38 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:08.842 23:50:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:08.842 23:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.842 23:50:38 -- common/autotest_common.sh@10 -- # set +x 00:06:09.105 ************************************ 00:06:09.105 START TEST accel_missing_filename 00:06:09.105 ************************************ 00:06:09.105 23:50:39 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:09.105 23:50:39 -- common/autotest_common.sh@638 -- # local es=0 00:06:09.105 23:50:39 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:09.105 23:50:39 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:09.105 23:50:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.105 23:50:39 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:09.105 23:50:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.105 23:50:39 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:09.105 23:50:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:09.105 23:50:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.105 23:50:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.105 23:50:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.105 23:50:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.105 23:50:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.105 23:50:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.105 23:50:39 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.105 23:50:39 -- accel/accel.sh@41 -- # jq -r . 00:06:09.105 [2024-04-26 23:50:39.162192] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:09.105 [2024-04-26 23:50:39.162288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198393 ] 00:06:09.105 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.105 [2024-04-26 23:50:39.223932] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.105 [2024-04-26 23:50:39.288121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.105 [2024-04-26 23:50:39.320111] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.366 [2024-04-26 23:50:39.357253] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:09.366 A filename is required. 
00:06:09.366 23:50:39 -- common/autotest_common.sh@641 -- # es=234 00:06:09.366 23:50:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:09.366 23:50:39 -- common/autotest_common.sh@650 -- # es=106 00:06:09.366 23:50:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:09.366 23:50:39 -- common/autotest_common.sh@658 -- # es=1 00:06:09.366 23:50:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:09.366 00:06:09.366 real 0m0.276s 00:06:09.366 user 0m0.210s 00:06:09.366 sys 0m0.108s 00:06:09.366 23:50:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.366 23:50:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.366 ************************************ 00:06:09.366 END TEST accel_missing_filename 00:06:09.366 ************************************ 00:06:09.366 23:50:39 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.366 23:50:39 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:09.366 23:50:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.366 23:50:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.366 ************************************ 00:06:09.366 START TEST accel_compress_verify 00:06:09.366 ************************************ 00:06:09.366 23:50:39 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.366 23:50:39 -- common/autotest_common.sh@638 -- # local es=0 00:06:09.366 23:50:39 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.366 23:50:39 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:09.366 23:50:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.366 23:50:39 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:09.366 23:50:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.366 23:50:39 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.366 23:50:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:09.366 23:50:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.366 23:50:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.366 23:50:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.366 23:50:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.366 23:50:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.366 23:50:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.366 23:50:39 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.366 23:50:39 -- accel/accel.sh@41 -- # jq -r . 00:06:09.628 [2024-04-26 23:50:39.603275] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:06:09.628 [2024-04-26 23:50:39.603340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198558 ] 00:06:09.628 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.628 [2024-04-26 23:50:39.665866] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.628 [2024-04-26 23:50:39.729872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.629 [2024-04-26 23:50:39.761811] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.629 [2024-04-26 23:50:39.799090] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:09.629 00:06:09.629 Compression does not support the verify option, aborting. 00:06:09.890 23:50:39 -- common/autotest_common.sh@641 -- # es=161 00:06:09.890 23:50:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:09.890 23:50:39 -- common/autotest_common.sh@650 -- # es=33 00:06:09.890 23:50:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:09.890 23:50:39 -- common/autotest_common.sh@658 -- # es=1 00:06:09.890 23:50:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:09.890 00:06:09.890 real 0m0.275s 00:06:09.890 user 0m0.209s 00:06:09.890 sys 0m0.107s 00:06:09.890 23:50:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.890 23:50:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.890 ************************************ 00:06:09.890 END TEST accel_compress_verify 00:06:09.890 ************************************ 00:06:09.890 23:50:39 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:09.890 23:50:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:09.890 23:50:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.890 23:50:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.890 ************************************ 00:06:09.890 START TEST accel_wrong_workload 00:06:09.890 ************************************ 00:06:09.890 23:50:40 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:09.890 23:50:40 -- common/autotest_common.sh@638 -- # local es=0 00:06:09.890 23:50:40 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:09.890 23:50:40 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:09.890 23:50:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.890 23:50:40 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:09.890 23:50:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.890 23:50:40 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:09.890 23:50:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:09.890 23:50:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.890 23:50:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.890 23:50:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.890 23:50:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.890 23:50:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.890 23:50:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.890 23:50:40 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.890 23:50:40 -- accel/accel.sh@41 -- # jq -r . 
00:06:09.890 Unsupported workload type: foobar 00:06:09.890 [2024-04-26 23:50:40.031417] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:09.890 accel_perf options: 00:06:09.890 [-h help message] 00:06:09.890 [-q queue depth per core] 00:06:09.890 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:09.890 [-T number of threads per core 00:06:09.890 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:09.890 [-t time in seconds] 00:06:09.890 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:09.890 [ dif_verify, , dif_generate, dif_generate_copy 00:06:09.890 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:09.890 [-l for compress/decompress workloads, name of uncompressed input file 00:06:09.890 [-S for crc32c workload, use this seed value (default 0) 00:06:09.890 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:09.890 [-f for fill workload, use this BYTE value (default 255) 00:06:09.890 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:09.890 [-y verify result if this switch is on] 00:06:09.890 [-a tasks to allocate per core (default: same value as -q)] 00:06:09.890 Can be used to spread operations across a wider range of memory. 00:06:09.890 23:50:40 -- common/autotest_common.sh@641 -- # es=1 00:06:09.890 23:50:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:09.890 23:50:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:09.890 23:50:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:09.890 00:06:09.890 real 0m0.034s 00:06:09.890 user 0m0.019s 00:06:09.890 sys 0m0.015s 00:06:09.890 23:50:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.890 23:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:09.890 ************************************ 00:06:09.890 END TEST accel_wrong_workload 00:06:09.890 ************************************ 00:06:09.890 Error: writing output failed: Broken pipe 00:06:09.890 23:50:40 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:09.890 23:50:40 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:09.890 23:50:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.890 23:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.153 ************************************ 00:06:10.153 START TEST accel_negative_buffers 00:06:10.153 ************************************ 00:06:10.153 23:50:40 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.153 23:50:40 -- common/autotest_common.sh@638 -- # local es=0 00:06:10.153 23:50:40 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:10.153 23:50:40 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:10.153 23:50:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:10.153 23:50:40 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:10.153 23:50:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:10.153 23:50:40 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:10.153 23:50:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:10.153 23:50:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.153 23:50:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.153 23:50:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.153 23:50:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.153 23:50:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.153 23:50:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.153 23:50:40 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.153 23:50:40 -- accel/accel.sh@41 -- # jq -r . 00:06:10.153 -x option must be non-negative. 00:06:10.153 [2024-04-26 23:50:40.234497] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:10.153 accel_perf options: 00:06:10.153 [-h help message] 00:06:10.153 [-q queue depth per core] 00:06:10.153 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.153 [-T number of threads per core 00:06:10.153 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.153 [-t time in seconds] 00:06:10.153 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.153 [ dif_verify, , dif_generate, dif_generate_copy 00:06:10.153 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.153 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.153 [-S for crc32c workload, use this seed value (default 0) 00:06:10.153 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.153 [-f for fill workload, use this BYTE value (default 255) 00:06:10.153 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.153 [-y verify result if this switch is on] 00:06:10.153 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.153 Can be used to spread operations across a wider range of memory. 
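Both usage dumps above come from argument validation in accel_perf itself: -w foobar is rejected because foobar is not in the printed workload list, and -x -1 is rejected because the xor workload needs a non-negative source-buffer count (the usage text notes a minimum of 2). For comparison, invocations that pass this validation look like the following; these are illustrative sketches run from the spdk checkout, not commands taken from this run.

# an accepted workload type instead of 'foobar'
./build/examples/accel_perf -t 1 -w compare -q 64 -o 4096
# xor with a valid number of source buffers (>= 2)
./build/examples/accel_perf -t 1 -w xor -y -x 3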
00:06:10.153 23:50:40 -- common/autotest_common.sh@641 -- # es=1 00:06:10.153 23:50:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:10.153 23:50:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:10.153 23:50:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:10.153 00:06:10.153 real 0m0.033s 00:06:10.154 user 0m0.017s 00:06:10.154 sys 0m0.015s 00:06:10.154 23:50:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.154 23:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.154 ************************************ 00:06:10.154 END TEST accel_negative_buffers 00:06:10.154 ************************************ 00:06:10.154 Error: writing output failed: Broken pipe 00:06:10.154 23:50:40 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:10.154 23:50:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:10.154 23:50:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.154 23:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.416 ************************************ 00:06:10.416 START TEST accel_crc32c 00:06:10.416 ************************************ 00:06:10.416 23:50:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:10.416 23:50:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.416 23:50:40 -- accel/accel.sh@17 -- # local accel_module 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:10.416 23:50:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:10.416 23:50:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.416 23:50:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.416 23:50:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.416 23:50:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.416 23:50:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.416 23:50:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.416 23:50:40 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.416 23:50:40 -- accel/accel.sh@41 -- # jq -r . 00:06:10.416 [2024-04-26 23:50:40.439670] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:06:10.416 [2024-04-26 23:50:40.439768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198828 ] 00:06:10.416 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.416 [2024-04-26 23:50:40.505897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.416 [2024-04-26 23:50:40.581548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val= 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val= 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val=0x1 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val= 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val= 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val=crc32c 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val=32 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val= 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val=software 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val=32 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val=32 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- 
accel/accel.sh@20 -- # val=1 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val=Yes 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val= 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.416 23:50:40 -- accel/accel.sh@20 -- # val= 00:06:10.416 23:50:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.416 23:50:40 -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 23:50:41 -- accel/accel.sh@20 -- # val= 00:06:11.825 23:50:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 23:50:41 -- accel/accel.sh@20 -- # val= 00:06:11.825 23:50:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 23:50:41 -- accel/accel.sh@20 -- # val= 00:06:11.825 23:50:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 23:50:41 -- accel/accel.sh@20 -- # val= 00:06:11.825 23:50:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 23:50:41 -- accel/accel.sh@20 -- # val= 00:06:11.825 23:50:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 23:50:41 -- accel/accel.sh@20 -- # val= 00:06:11.825 23:50:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 23:50:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.825 23:50:41 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:11.825 23:50:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.825 00:06:11.825 real 0m1.297s 00:06:11.825 user 0m1.191s 00:06:11.825 sys 0m0.108s 00:06:11.825 23:50:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.825 23:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.825 ************************************ 00:06:11.825 END TEST accel_crc32c 00:06:11.825 ************************************ 00:06:11.825 23:50:41 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:11.825 23:50:41 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:11.825 23:50:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.825 23:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.825 ************************************ 00:06:11.825 START TEST 
accel_crc32c_C2 00:06:11.825 ************************************ 00:06:11.825 23:50:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:11.825 23:50:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.825 23:50:41 -- accel/accel.sh@17 -- # local accel_module 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 23:50:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 23:50:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:11.825 23:50:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:11.825 23:50:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.825 23:50:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.825 23:50:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.825 23:50:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.825 23:50:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.825 23:50:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.825 23:50:41 -- accel/accel.sh@40 -- # local IFS=, 00:06:11.825 23:50:41 -- accel/accel.sh@41 -- # jq -r . 00:06:11.825 [2024-04-26 23:50:41.882494] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:11.825 [2024-04-26 23:50:41.882585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199185 ] 00:06:11.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.825 [2024-04-26 23:50:41.945403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.825 [2024-04-26 23:50:42.012016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.825 23:50:42 -- accel/accel.sh@20 -- # val= 00:06:11.825 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:11.825 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:11.825 23:50:42 -- accel/accel.sh@20 -- # val= 00:06:11.825 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.825 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val=0x1 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val= 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val= 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val=crc32c 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val=0 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val= 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val=software 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val=32 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val=32 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val=1 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val=Yes 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val= 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.086 23:50:42 -- accel/accel.sh@20 -- # val= 00:06:12.086 23:50:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.086 23:50:42 -- accel/accel.sh@19 -- # read -r var val 00:06:13.028 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.028 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.028 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.028 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.028 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.028 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.028 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.028 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.028 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.028 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.028 23:50:43 -- 
accel/accel.sh@19 -- # read -r var val 00:06:13.028 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.028 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.028 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.028 23:50:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.028 23:50:43 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:13.028 23:50:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.028 00:06:13.028 real 0m1.283s 00:06:13.028 user 0m1.190s 00:06:13.028 sys 0m0.095s 00:06:13.028 23:50:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.028 23:50:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.028 ************************************ 00:06:13.028 END TEST accel_crc32c_C2 00:06:13.028 ************************************ 00:06:13.028 23:50:43 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:13.028 23:50:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:13.028 23:50:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.028 23:50:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.289 ************************************ 00:06:13.289 START TEST accel_copy 00:06:13.289 ************************************ 00:06:13.289 23:50:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:13.289 23:50:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.289 23:50:43 -- accel/accel.sh@17 -- # local accel_module 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:13.289 23:50:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:13.289 23:50:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.289 23:50:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.289 23:50:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.289 23:50:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.289 23:50:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.289 23:50:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.289 23:50:43 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.289 23:50:43 -- accel/accel.sh@41 -- # jq -r . 00:06:13.289 [2024-04-26 23:50:43.328505] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
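The two crc32c passes that just completed differ only in how the 4096-byte buffers are presented: -S 32 seeds the CRC calculation (default seed 0 per the usage text printed earlier), while -C 2 presents each operation as a two-element io vector; both end by asserting that the software module handled the work. Outside the harness the equivalent runs reduce to roughly the following, with the -c /dev/fd/62 config plumbing omitted (sketch only).

./build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # seeded CRC-32C over single 4 KiB buffers
./build/examples/accel_perf -t 1 -w crc32c -y -C 2     # same workload, two-element io vectors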
00:06:13.289 [2024-04-26 23:50:43.328583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199530 ] 00:06:13.289 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.289 [2024-04-26 23:50:43.391588] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.289 [2024-04-26 23:50:43.460175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val=0x1 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val=copy 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val=software 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@22 -- # accel_module=software 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val=32 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val=32 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val=1 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val=Yes 00:06:13.289 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.289 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.289 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.290 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.290 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.290 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.290 23:50:43 -- accel/accel.sh@20 -- # val= 00:06:13.290 23:50:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.290 23:50:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.290 23:50:43 -- accel/accel.sh@19 -- # read -r var val 00:06:14.677 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.677 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.677 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.677 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.677 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.677 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.677 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.677 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.677 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.677 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.677 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.677 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.677 23:50:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.677 23:50:44 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:14.677 23:50:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.677 00:06:14.677 real 0m1.283s 00:06:14.677 user 0m1.189s 00:06:14.677 sys 0m0.096s 00:06:14.677 23:50:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.677 23:50:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.677 ************************************ 00:06:14.677 END TEST accel_copy 00:06:14.677 ************************************ 00:06:14.677 23:50:44 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.677 23:50:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:14.677 23:50:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.677 23:50:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.677 ************************************ 00:06:14.677 START TEST accel_fill 00:06:14.677 ************************************ 00:06:14.677 23:50:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.677 23:50:44 -- accel/accel.sh@16 -- # local accel_opc 
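Every accel_perf launch in this log is preceded by the same build_accel_config block (it appears again for accel_fill just below): accel_json_cfg starts empty, each [[ 0 -gt 0 ]] guard is a disabled optional-module flag, and the comma IFS plus jq -r . only come into play when at least one module snippet is present. Nothing is enabled in this run, so accel_perf falls back to the software path, which is why every test finishes with the [[ -n software ]] assertion. A rough sketch of that pattern, assumed rather than copied from the accel.sh@31-41 lines traced here:

build_accel_config() {                 # sketch only; the real function is the accel.sh trace above
    accel_json_cfg=()                  # stays empty here: all optional accel modules are disabled
    # each "[[ 0 -gt 0 ]]" guard would append a module-specific JSON snippet when its flag is set
    local IFS=,                        # snippets are comma-joined before being validated with: jq -r .
}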
00:06:14.677 23:50:44 -- accel/accel.sh@17 -- # local accel_module 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.677 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.677 23:50:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.677 23:50:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.677 23:50:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.677 23:50:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.677 23:50:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.677 23:50:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.677 23:50:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.677 23:50:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.677 23:50:44 -- accel/accel.sh@40 -- # local IFS=, 00:06:14.677 23:50:44 -- accel/accel.sh@41 -- # jq -r . 00:06:14.677 [2024-04-26 23:50:44.768230] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:14.677 [2024-04-26 23:50:44.768320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199754 ] 00:06:14.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.677 [2024-04-26 23:50:44.830463] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.677 [2024-04-26 23:50:44.896953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val=0x1 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val=fill 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val=0x80 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- 
# read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val=software 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@22 -- # accel_module=software 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val=64 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val=64 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val=1 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val=Yes 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.940 23:50:44 -- accel/accel.sh@20 -- # val= 00:06:14.940 23:50:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.940 23:50:44 -- accel/accel.sh@19 -- # read -r var val 00:06:15.886 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:15.886 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:15.886 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:15.886 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:15.886 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:15.886 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:15.886 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:15.886 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:15.886 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:15.886 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:15.886 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:15.886 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # 
IFS=: 00:06:15.886 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:15.886 23:50:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.886 23:50:46 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:15.886 23:50:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.886 00:06:15.886 real 0m1.282s 00:06:15.886 user 0m1.189s 00:06:15.886 sys 0m0.096s 00:06:15.886 23:50:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.886 23:50:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.886 ************************************ 00:06:15.886 END TEST accel_fill 00:06:15.886 ************************************ 00:06:15.886 23:50:46 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:15.886 23:50:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:15.886 23:50:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.886 23:50:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.147 ************************************ 00:06:16.147 START TEST accel_copy_crc32c 00:06:16.147 ************************************ 00:06:16.147 23:50:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:16.147 23:50:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.147 23:50:46 -- accel/accel.sh@17 -- # local accel_module 00:06:16.147 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.147 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.147 23:50:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:16.147 23:50:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:16.147 23:50:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.147 23:50:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.147 23:50:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.147 23:50:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.147 23:50:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.147 23:50:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.147 23:50:46 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.147 23:50:46 -- accel/accel.sh@41 -- # jq -r . 00:06:16.147 [2024-04-26 23:50:46.232608] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
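The accel_fill pass that just ended maps its flags straight onto the trace above: -f 128 shows up as val=0x80 (the fill byte, default 255 per the usage text), and the two val=64 entries line up with the -q/-a values (queue depth and tasks per core). Reproduced outside the harness, minus the -c /dev/fd/62 config plumbing, the run is roughly:

./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill 4 KiB buffers with byte 0x80 and verify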
00:06:16.147 [2024-04-26 23:50:46.232698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200001 ] 00:06:16.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.147 [2024-04-26 23:50:46.294886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.147 [2024-04-26 23:50:46.360488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val=0x1 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val=0 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val=software 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val=32 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 
00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val=32 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val=1 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val=Yes 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.409 23:50:46 -- accel/accel.sh@20 -- # val= 00:06:16.409 23:50:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.409 23:50:46 -- accel/accel.sh@19 -- # read -r var val 00:06:17.353 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.353 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.354 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.354 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.354 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.354 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.354 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.354 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.354 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.354 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.354 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.354 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.354 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.354 23:50:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.354 23:50:47 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:17.354 23:50:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.354 00:06:17.354 real 0m1.281s 00:06:17.354 user 0m1.182s 00:06:17.354 sys 0m0.100s 00:06:17.354 23:50:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.354 23:50:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.354 ************************************ 00:06:17.354 END TEST accel_copy_crc32c 00:06:17.354 ************************************ 00:06:17.354 23:50:47 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:17.354 
23:50:47 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:17.354 23:50:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.354 23:50:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.615 ************************************ 00:06:17.615 START TEST accel_copy_crc32c_C2 00:06:17.615 ************************************ 00:06:17.615 23:50:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:17.615 23:50:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.615 23:50:47 -- accel/accel.sh@17 -- # local accel_module 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:17.615 23:50:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:17.615 23:50:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.615 23:50:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.615 23:50:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.615 23:50:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.615 23:50:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.615 23:50:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.615 23:50:47 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.615 23:50:47 -- accel/accel.sh@41 -- # jq -r . 00:06:17.615 [2024-04-26 23:50:47.664089] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:17.615 [2024-04-26 23:50:47.664177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200306 ] 00:06:17.615 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.615 [2024-04-26 23:50:47.725058] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.615 [2024-04-26 23:50:47.788824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val=0x1 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 
23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val=0 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val=software 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val=32 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val=32 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val=1 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val=Yes 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.615 23:50:47 -- accel/accel.sh@20 -- # val= 00:06:17.615 23:50:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.615 23:50:47 -- accel/accel.sh@19 -- # read -r var val 00:06:19.021 23:50:48 -- accel/accel.sh@20 -- # val= 00:06:19.021 23:50:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # IFS=: 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # read -r var val 00:06:19.021 23:50:48 -- accel/accel.sh@20 -- # val= 00:06:19.021 23:50:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # IFS=: 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # read -r var val 00:06:19.021 23:50:48 -- accel/accel.sh@20 -- # val= 00:06:19.021 23:50:48 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # IFS=: 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # read -r var val 00:06:19.021 23:50:48 -- accel/accel.sh@20 -- # val= 00:06:19.021 23:50:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # IFS=: 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # read -r var val 00:06:19.021 23:50:48 -- accel/accel.sh@20 -- # val= 00:06:19.021 23:50:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # IFS=: 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # read -r var val 00:06:19.021 23:50:48 -- accel/accel.sh@20 -- # val= 00:06:19.021 23:50:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # IFS=: 00:06:19.021 23:50:48 -- accel/accel.sh@19 -- # read -r var val 00:06:19.021 23:50:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.021 23:50:48 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:19.021 23:50:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.021 00:06:19.021 real 0m1.277s 00:06:19.021 user 0m1.179s 00:06:19.021 sys 0m0.100s 00:06:19.021 23:50:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.021 23:50:48 -- common/autotest_common.sh@10 -- # set +x 00:06:19.021 ************************************ 00:06:19.021 END TEST accel_copy_crc32c_C2 00:06:19.021 ************************************ 00:06:19.021 23:50:48 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:19.021 23:50:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.021 23:50:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.021 23:50:48 -- common/autotest_common.sh@10 -- # set +x 00:06:19.021 ************************************ 00:06:19.021 START TEST accel_dualcast 00:06:19.021 ************************************ 00:06:19.021 23:50:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:19.021 23:50:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.021 23:50:49 -- accel/accel.sh@17 -- # local accel_module 00:06:19.021 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.021 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.021 23:50:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:19.021 23:50:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:19.021 23:50:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.021 23:50:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.021 23:50:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.021 23:50:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.021 23:50:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.021 23:50:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.021 23:50:49 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.021 23:50:49 -- accel/accel.sh@41 -- # jq -r . 00:06:19.021 [2024-04-26 23:50:49.109314] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:06:19.021 [2024-04-26 23:50:49.109402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200661 ] 00:06:19.021 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.021 [2024-04-26 23:50:49.181217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.282 [2024-04-26 23:50:49.246169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val= 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val= 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val=0x1 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val= 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val= 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val=dualcast 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val= 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val=software 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val=32 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val=32 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val=1 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val=Yes 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val= 00:06:19.282 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.282 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.282 23:50:49 -- accel/accel.sh@20 -- # val= 00:06:19.283 23:50:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.283 23:50:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.283 23:50:49 -- accel/accel.sh@19 -- # read -r var val 00:06:20.227 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.227 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.227 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.227 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.227 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.227 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.227 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.227 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.227 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.227 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.227 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.227 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.227 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.227 23:50:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.227 23:50:50 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:20.227 23:50:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.227 00:06:20.227 real 0m1.290s 00:06:20.227 user 0m1.187s 00:06:20.227 sys 0m0.104s 00:06:20.227 23:50:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.227 23:50:50 -- common/autotest_common.sh@10 -- # set +x 00:06:20.227 ************************************ 00:06:20.227 END TEST accel_dualcast 00:06:20.227 ************************************ 00:06:20.227 23:50:50 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:20.227 23:50:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:20.227 23:50:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.227 23:50:50 -- common/autotest_common.sh@10 -- # set +x 00:06:20.488 ************************************ 00:06:20.488 START TEST accel_compare 00:06:20.488 ************************************ 00:06:20.488 23:50:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:20.488 23:50:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.488 23:50:50 -- 
accel/accel.sh@17 -- # local accel_module 00:06:20.488 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.488 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.488 23:50:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:20.488 23:50:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:20.488 23:50:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.488 23:50:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.488 23:50:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.488 23:50:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.488 23:50:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.488 23:50:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.488 23:50:50 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.488 23:50:50 -- accel/accel.sh@41 -- # jq -r . 00:06:20.488 [2024-04-26 23:50:50.560096] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:20.489 [2024-04-26 23:50:50.560155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201024 ] 00:06:20.489 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.489 [2024-04-26 23:50:50.621043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.489 [2024-04-26 23:50:50.684733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val=0x1 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val=compare 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- 
accel/accel.sh@20 -- # val=software 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val=32 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val=32 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val=1 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val=Yes 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:20.750 23:50:50 -- accel/accel.sh@20 -- # val= 00:06:20.750 23:50:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # IFS=: 00:06:20.750 23:50:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 23:50:51 -- accel/accel.sh@20 -- # val= 00:06:21.695 23:50:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 23:50:51 -- accel/accel.sh@20 -- # val= 00:06:21.695 23:50:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 23:50:51 -- accel/accel.sh@20 -- # val= 00:06:21.695 23:50:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 23:50:51 -- accel/accel.sh@20 -- # val= 00:06:21.695 23:50:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 23:50:51 -- accel/accel.sh@20 -- # val= 00:06:21.695 23:50:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 23:50:51 -- accel/accel.sh@20 -- # val= 00:06:21.695 23:50:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # IFS=: 00:06:21.695 23:50:51 -- accel/accel.sh@19 -- # read -r var val 00:06:21.695 23:50:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.695 23:50:51 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:21.695 23:50:51 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:21.695 00:06:21.695 real 0m1.276s 00:06:21.695 user 0m1.177s 00:06:21.695 sys 0m0.100s 00:06:21.695 23:50:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.695 23:50:51 -- common/autotest_common.sh@10 -- # set +x 00:06:21.695 ************************************ 00:06:21.695 END TEST accel_compare 00:06:21.695 ************************************ 00:06:21.695 23:50:51 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:21.695 23:50:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:21.695 23:50:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.695 23:50:51 -- common/autotest_common.sh@10 -- # set +x 00:06:21.957 ************************************ 00:06:21.957 START TEST accel_xor 00:06:21.957 ************************************ 00:06:21.957 23:50:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:21.957 23:50:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.957 23:50:51 -- accel/accel.sh@17 -- # local accel_module 00:06:21.957 23:50:51 -- accel/accel.sh@19 -- # IFS=: 00:06:21.957 23:50:51 -- accel/accel.sh@19 -- # read -r var val 00:06:21.957 23:50:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:21.957 23:50:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:21.957 23:50:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.957 23:50:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.957 23:50:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.957 23:50:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.957 23:50:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.957 23:50:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.957 23:50:51 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.957 23:50:51 -- accel/accel.sh@41 -- # jq -r . 00:06:21.957 [2024-04-26 23:50:52.014592] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:06:21.957 [2024-04-26 23:50:52.014681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201377 ] 00:06:21.957 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.957 [2024-04-26 23:50:52.076673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.957 [2024-04-26 23:50:52.139703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.957 23:50:52 -- accel/accel.sh@20 -- # val= 00:06:21.957 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.957 23:50:52 -- accel/accel.sh@20 -- # val= 00:06:21.957 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.957 23:50:52 -- accel/accel.sh@20 -- # val=0x1 00:06:21.957 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.957 23:50:52 -- accel/accel.sh@20 -- # val= 00:06:21.957 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.957 23:50:52 -- accel/accel.sh@20 -- # val= 00:06:21.957 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.957 23:50:52 -- accel/accel.sh@20 -- # val=xor 00:06:21.957 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.957 23:50:52 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.957 23:50:52 -- accel/accel.sh@20 -- # val=2 00:06:21.957 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.957 23:50:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.957 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.957 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.957 23:50:52 -- accel/accel.sh@20 -- # val= 00:06:21.958 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.958 23:50:52 -- accel/accel.sh@20 -- # val=software 00:06:21.958 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.958 23:50:52 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.958 23:50:52 -- accel/accel.sh@20 -- # val=32 00:06:21.958 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.958 23:50:52 -- accel/accel.sh@20 -- # val=32 00:06:21.958 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.958 23:50:52 -- 
accel/accel.sh@20 -- # val=1 00:06:21.958 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.958 23:50:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.958 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.958 23:50:52 -- accel/accel.sh@20 -- # val=Yes 00:06:21.958 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.958 23:50:52 -- accel/accel.sh@20 -- # val= 00:06:21.958 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:21.958 23:50:52 -- accel/accel.sh@20 -- # val= 00:06:21.958 23:50:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # IFS=: 00:06:21.958 23:50:52 -- accel/accel.sh@19 -- # read -r var val 00:06:23.349 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.349 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.349 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.349 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.349 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.349 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.349 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.349 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.349 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.349 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.349 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.349 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.349 23:50:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.349 23:50:53 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:23.349 23:50:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.349 00:06:23.349 real 0m1.278s 00:06:23.349 user 0m1.185s 00:06:23.349 sys 0m0.096s 00:06:23.349 23:50:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.349 23:50:53 -- common/autotest_common.sh@10 -- # set +x 00:06:23.349 ************************************ 00:06:23.349 END TEST accel_xor 00:06:23.349 ************************************ 00:06:23.349 23:50:53 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:23.349 23:50:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:23.349 23:50:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.349 23:50:53 -- common/autotest_common.sh@10 -- # set +x 00:06:23.349 ************************************ 00:06:23.349 START TEST accel_xor 
00:06:23.349 ************************************ 00:06:23.349 23:50:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:23.349 23:50:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.349 23:50:53 -- accel/accel.sh@17 -- # local accel_module 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.349 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.349 23:50:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:23.349 23:50:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:23.349 23:50:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.349 23:50:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.349 23:50:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.349 23:50:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.349 23:50:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.349 23:50:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.349 23:50:53 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.349 23:50:53 -- accel/accel.sh@41 -- # jq -r . 00:06:23.349 [2024-04-26 23:50:53.472161] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:23.349 [2024-04-26 23:50:53.472224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201727 ] 00:06:23.349 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.349 [2024-04-26 23:50:53.533708] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.609 [2024-04-26 23:50:53.598565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val=0x1 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val=xor 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val=3 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val=software 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val=32 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val=32 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val=1 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val=Yes 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.609 23:50:53 -- accel/accel.sh@20 -- # val= 00:06:23.609 23:50:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.609 23:50:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.550 23:50:54 -- accel/accel.sh@20 -- # val= 00:06:24.550 23:50:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # IFS=: 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # read -r var val 00:06:24.550 23:50:54 -- accel/accel.sh@20 -- # val= 00:06:24.550 23:50:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # IFS=: 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # read -r var val 00:06:24.550 23:50:54 -- accel/accel.sh@20 -- # val= 00:06:24.550 23:50:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # IFS=: 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # read -r var val 00:06:24.550 23:50:54 -- accel/accel.sh@20 -- # val= 00:06:24.550 23:50:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # IFS=: 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # read -r var val 00:06:24.550 23:50:54 -- accel/accel.sh@20 -- # val= 00:06:24.550 23:50:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # IFS=: 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # 
read -r var val 00:06:24.550 23:50:54 -- accel/accel.sh@20 -- # val= 00:06:24.550 23:50:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # IFS=: 00:06:24.550 23:50:54 -- accel/accel.sh@19 -- # read -r var val 00:06:24.550 23:50:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.550 23:50:54 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:24.550 23:50:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.550 00:06:24.550 real 0m1.279s 00:06:24.550 user 0m1.181s 00:06:24.550 sys 0m0.101s 00:06:24.550 23:50:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.550 23:50:54 -- common/autotest_common.sh@10 -- # set +x 00:06:24.550 ************************************ 00:06:24.550 END TEST accel_xor 00:06:24.550 ************************************ 00:06:24.550 23:50:54 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:24.550 23:50:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:24.550 23:50:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.550 23:50:54 -- common/autotest_common.sh@10 -- # set +x 00:06:24.809 ************************************ 00:06:24.809 START TEST accel_dif_verify 00:06:24.809 ************************************ 00:06:24.809 23:50:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:24.809 23:50:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.809 23:50:54 -- accel/accel.sh@17 -- # local accel_module 00:06:24.809 23:50:54 -- accel/accel.sh@19 -- # IFS=: 00:06:24.809 23:50:54 -- accel/accel.sh@19 -- # read -r var val 00:06:24.809 23:50:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:24.809 23:50:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:24.809 23:50:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.809 23:50:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.809 23:50:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.809 23:50:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.809 23:50:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.809 23:50:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.809 23:50:54 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.809 23:50:54 -- accel/accel.sh@41 -- # jq -r . 00:06:24.809 [2024-04-26 23:50:54.903520] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:06:24.809 [2024-04-26 23:50:54.903616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201962 ] 00:06:24.809 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.809 [2024-04-26 23:50:54.965760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.070 [2024-04-26 23:50:55.030564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val= 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val= 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val=0x1 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val= 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val= 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val=dif_verify 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val= 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val=software 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r 
var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val=32 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val=32 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val=1 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val=No 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val= 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.070 23:50:55 -- accel/accel.sh@20 -- # val= 00:06:25.070 23:50:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.070 23:50:55 -- accel/accel.sh@19 -- # read -r var val 00:06:26.009 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.009 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.009 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.009 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.009 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.009 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.009 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.009 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.009 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.009 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.009 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.009 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.009 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.009 23:50:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.009 23:50:56 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:26.009 23:50:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.009 00:06:26.009 real 0m1.280s 00:06:26.009 user 0m1.176s 00:06:26.009 sys 0m0.106s 00:06:26.009 23:50:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.009 23:50:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.009 
************************************ 00:06:26.009 END TEST accel_dif_verify 00:06:26.009 ************************************ 00:06:26.009 23:50:56 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:26.009 23:50:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:26.009 23:50:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.009 23:50:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.269 ************************************ 00:06:26.269 START TEST accel_dif_generate 00:06:26.269 ************************************ 00:06:26.269 23:50:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:26.269 23:50:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.269 23:50:56 -- accel/accel.sh@17 -- # local accel_module 00:06:26.269 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.269 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.269 23:50:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:26.269 23:50:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:26.269 23:50:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.269 23:50:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.269 23:50:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.269 23:50:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.269 23:50:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.269 23:50:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.269 23:50:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.269 23:50:56 -- accel/accel.sh@41 -- # jq -r . 00:06:26.269 [2024-04-26 23:50:56.365424] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:06:26.269 [2024-04-26 23:50:56.365501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202210 ] 00:06:26.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.269 [2024-04-26 23:50:56.428408] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.529 [2024-04-26 23:50:56.493304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.529 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.529 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.529 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.529 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.529 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.529 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.529 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.529 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.529 23:50:56 -- accel/accel.sh@20 -- # val=0x1 00:06:26.529 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.529 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.529 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.529 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.529 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.529 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val=dif_generate 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val=software 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read 
-r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val=32 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val=32 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val=1 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val=No 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:26.530 23:50:56 -- accel/accel.sh@20 -- # val= 00:06:26.530 23:50:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # IFS=: 00:06:26.530 23:50:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.471 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.471 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.471 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.471 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.471 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.471 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.471 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.471 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.471 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.471 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.471 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.471 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.471 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.471 23:50:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.471 23:50:57 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:27.471 23:50:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.471 00:06:27.471 real 0m1.280s 00:06:27.471 user 0m0.005s 00:06:27.471 sys 0m0.001s 00:06:27.471 23:50:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.471 23:50:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.471 
************************************ 00:06:27.471 END TEST accel_dif_generate 00:06:27.471 ************************************ 00:06:27.471 23:50:57 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:27.471 23:50:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:27.471 23:50:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.471 23:50:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.733 ************************************ 00:06:27.733 START TEST accel_dif_generate_copy 00:06:27.733 ************************************ 00:06:27.733 23:50:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:27.733 23:50:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.733 23:50:57 -- accel/accel.sh@17 -- # local accel_module 00:06:27.733 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.733 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.733 23:50:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:27.733 23:50:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:27.733 23:50:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.733 23:50:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.733 23:50:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.733 23:50:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.733 23:50:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.733 23:50:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.733 23:50:57 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.733 23:50:57 -- accel/accel.sh@41 -- # jq -r . 00:06:27.733 [2024-04-26 23:50:57.828085] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:06:27.733 [2024-04-26 23:50:57.828145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202498 ] 00:06:27.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.733 [2024-04-26 23:50:57.889626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.994 [2024-04-26 23:50:57.954545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val=0x1 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val=software 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val=32 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val=32 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var 
val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val=1 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val=No 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:27.994 23:50:57 -- accel/accel.sh@20 -- # val= 00:06:27.994 23:50:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # IFS=: 00:06:27.994 23:50:57 -- accel/accel.sh@19 -- # read -r var val 00:06:28.936 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:28.936 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:28.936 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:28.936 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:28.936 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:28.936 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:28.936 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:28.936 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:28.936 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:28.936 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:28.936 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:28.936 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:28.936 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:28.936 23:50:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.936 23:50:59 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:28.936 23:50:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.936 00:06:28.936 real 0m1.277s 00:06:28.936 user 0m0.004s 00:06:28.936 sys 0m0.001s 00:06:28.936 23:50:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.936 23:50:59 -- common/autotest_common.sh@10 -- # set +x 00:06:28.936 ************************************ 00:06:28.936 END TEST accel_dif_generate_copy 00:06:28.936 ************************************ 00:06:28.936 23:50:59 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:28.936 23:50:59 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.936 23:50:59 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:28.936 23:50:59 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.936 23:50:59 -- common/autotest_common.sh@10 -- # set +x 00:06:29.197 ************************************ 00:06:29.197 START TEST accel_comp 00:06:29.197 ************************************ 00:06:29.197 23:50:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.197 23:50:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.197 23:50:59 -- accel/accel.sh@17 -- # local accel_module 00:06:29.197 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.197 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.197 23:50:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.197 23:50:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.197 23:50:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.197 23:50:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.197 23:50:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.197 23:50:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.197 23:50:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.197 23:50:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.197 23:50:59 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.197 23:50:59 -- accel/accel.sh@41 -- # jq -r . 00:06:29.197 [2024-04-26 23:50:59.291278] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:29.197 [2024-04-26 23:50:59.291343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202856 ] 00:06:29.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.197 [2024-04-26 23:50:59.355311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.458 [2024-04-26 23:50:59.426142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val=0x1 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 
-- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val=compress 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val=software 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val=32 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val=32 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val=1 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val=No 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:29.458 23:50:59 -- accel/accel.sh@20 -- # val= 00:06:29.458 23:50:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # IFS=: 00:06:29.458 23:50:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.399 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.399 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.399 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.399 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # read 
-r var val 00:06:30.399 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.399 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.399 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.399 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.399 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.399 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.399 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.399 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.399 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.399 23:51:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.399 23:51:00 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:30.399 23:51:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.399 00:06:30.399 real 0m1.290s 00:06:30.399 user 0m0.005s 00:06:30.399 sys 0m0.001s 00:06:30.399 23:51:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.399 23:51:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.399 ************************************ 00:06:30.399 END TEST accel_comp 00:06:30.399 ************************************ 00:06:30.399 23:51:00 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.399 23:51:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:30.399 23:51:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.399 23:51:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.661 ************************************ 00:06:30.661 START TEST accel_decomp 00:06:30.661 ************************************ 00:06:30.661 23:51:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.661 23:51:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.661 23:51:00 -- accel/accel.sh@17 -- # local accel_module 00:06:30.661 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.661 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.661 23:51:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.661 23:51:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.661 23:51:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.661 23:51:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.661 23:51:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.661 23:51:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.661 23:51:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.661 23:51:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.661 23:51:00 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.661 23:51:00 -- accel/accel.sh@41 -- # jq -r . 00:06:30.661 [2024-04-26 23:51:00.756007] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
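The accel_decomp case started above mirrors the compress run that precedes it: accel_test launches accel_perf with a 1-second software decompress workload against the bib test file, plus -y, and its trace below records val=Yes where the compress and dif_generate_copy runs recorded val=No, which lines up with -y enabling verification of the output. The JSON accel config the harness pipes in through -c /dev/fd/62 is empty for these runs (accel_json_cfg=() with no entries in the trace), so a rough hand-run sketch, assuming only a local SPDK checkout with the example binaries built, would be:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # any SPDK checkout with build/examples present
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y    # 1-second software decompress of the bib vector, output verified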
00:06:30.661 [2024-04-26 23:51:00.756067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203259 ] 00:06:30.661 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.661 [2024-04-26 23:51:00.819224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.922 [2024-04-26 23:51:00.885100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val=0x1 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val=decompress 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val=software 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val=32 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 
-- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val=32 00:06:30.922 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.922 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.922 23:51:00 -- accel/accel.sh@20 -- # val=1 00:06:30.923 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.923 23:51:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.923 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.923 23:51:00 -- accel/accel.sh@20 -- # val=Yes 00:06:30.923 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.923 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.923 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:30.923 23:51:00 -- accel/accel.sh@20 -- # val= 00:06:30.923 23:51:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # IFS=: 00:06:30.923 23:51:00 -- accel/accel.sh@19 -- # read -r var val 00:06:31.863 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:31.863 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.863 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:31.863 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:31.863 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:31.863 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.863 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:31.863 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:31.863 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:31.863 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.863 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:31.863 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:31.863 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:31.863 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.864 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:31.864 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:31.864 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:31.864 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.864 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:31.864 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:31.864 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:31.864 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.864 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:31.864 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:31.864 23:51:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.864 23:51:02 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.864 23:51:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.864 00:06:31.864 real 0m1.285s 00:06:31.864 user 0m0.006s 00:06:31.864 sys 0m0.001s 00:06:31.864 23:51:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.864 23:51:02 -- common/autotest_common.sh@10 -- # set +x 00:06:31.864 ************************************ 00:06:31.864 END TEST accel_decomp 00:06:31.864 ************************************ 00:06:31.864 23:51:02 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:31.864 23:51:02 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:31.864 23:51:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.864 23:51:02 -- common/autotest_common.sh@10 -- # set +x 00:06:32.124 ************************************ 00:06:32.124 START TEST accel_decmop_full 00:06:32.124 ************************************ 00:06:32.124 23:51:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.124 23:51:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.124 23:51:02 -- accel/accel.sh@17 -- # local accel_module 00:06:32.124 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.124 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.124 23:51:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.124 23:51:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.124 23:51:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.124 23:51:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.124 23:51:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.124 23:51:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.124 23:51:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.124 23:51:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.124 23:51:02 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.124 23:51:02 -- accel/accel.sh@41 -- # jq -r . 00:06:32.124 [2024-04-26 23:51:02.225232] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
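The accel_decmop_full case started above (the transposed name is exactly what accel.sh@118 registers) adds only -o 0 to the same command line; its trace below reports '111250 bytes' where the previous run reported '4096 bytes', which suggests the whole bib file is processed as one full-size transfer rather than in 4096-byte blocks. Hand-run sketch, same assumptions as the accel_decomp sketch above:

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0    # full-size (111250-byte) transfers instead of 4096-byte blocks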
00:06:32.124 [2024-04-26 23:51:02.225324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203680 ] 00:06:32.124 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.124 [2024-04-26 23:51:02.293120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.384 [2024-04-26 23:51:02.365255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.384 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:32.384 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.384 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.384 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.384 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:32.384 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.384 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.384 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.384 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:32.384 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.384 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.384 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.384 23:51:02 -- accel/accel.sh@20 -- # val=0x1 00:06:32.384 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val=decompress 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val=software 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val=32 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 
-- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val=32 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val=1 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val=Yes 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.385 23:51:02 -- accel/accel.sh@20 -- # val= 00:06:32.385 23:51:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.385 23:51:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.422 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.422 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.422 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.422 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.422 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.422 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.422 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.422 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.422 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.422 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.422 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.423 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 23:51:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.423 23:51:03 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.423 23:51:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.423 00:06:33.423 real 0m1.311s 00:06:33.423 user 0m1.205s 00:06:33.423 sys 0m0.106s 00:06:33.423 23:51:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.423 23:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:33.423 ************************************ 00:06:33.423 END TEST accel_decmop_full 00:06:33.423 ************************************ 00:06:33.423 23:51:03 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:33.423 23:51:03 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:33.423 23:51:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.423 23:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:33.683 ************************************ 00:06:33.683 START TEST accel_decomp_mcore 00:06:33.683 ************************************ 00:06:33.683 23:51:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:33.683 23:51:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.683 23:51:03 -- accel/accel.sh@17 -- # local accel_module 00:06:33.683 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.683 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.683 23:51:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:33.683 23:51:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:33.683 23:51:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.683 23:51:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.683 23:51:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.683 23:51:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.683 23:51:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.683 23:51:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.683 23:51:03 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.683 23:51:03 -- accel/accel.sh@41 -- # jq -r . 00:06:33.683 [2024-04-26 23:51:03.725051] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
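accel_decomp_mcore, started above, reruns the 4096-byte decompress workload with the core mask -m 0xf; the EAL banner that follows reports four available cores and a reactor started on each of cores 0-3. Hand-run sketch, same assumptions:

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf    # same workload spread across cores 0-3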
00:06:33.683 [2024-04-26 23:51:03.725149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204044 ] 00:06:33.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.683 [2024-04-26 23:51:03.792018] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.683 [2024-04-26 23:51:03.867724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.683 [2024-04-26 23:51:03.867850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.683 [2024-04-26 23:51:03.867959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.683 [2024-04-26 23:51:03.867959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val=0xf 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val=decompress 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val=software 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@22 -- # accel_module=software 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val=32 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val=32 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val=1 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val=Yes 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:33.944 23:51:03 -- accel/accel.sh@20 -- # val= 00:06:33.944 23:51:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # IFS=: 00:06:33.944 23:51:03 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:04 -- accel/accel.sh@20 -- # val= 00:06:34.888 23:51:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.888 23:51:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.888 23:51:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:04 -- accel/accel.sh@20 -- # val= 00:06:34.888 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:34.888 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:34.888 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:34.888 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:34.888 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:34.888 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:34.888 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.888 
23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:34.888 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:34.888 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:34.888 23:51:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.888 23:51:05 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.888 23:51:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.888 00:06:34.888 real 0m1.311s 00:06:34.888 user 0m4.444s 00:06:34.888 sys 0m0.115s 00:06:34.888 23:51:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.888 23:51:05 -- common/autotest_common.sh@10 -- # set +x 00:06:34.888 ************************************ 00:06:34.888 END TEST accel_decomp_mcore 00:06:34.888 ************************************ 00:06:34.888 23:51:05 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:34.888 23:51:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:34.888 23:51:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.888 23:51:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.150 ************************************ 00:06:35.150 START TEST accel_decomp_full_mcore 00:06:35.150 ************************************ 00:06:35.150 23:51:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.150 23:51:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.150 23:51:05 -- accel/accel.sh@17 -- # local accel_module 00:06:35.150 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.150 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.150 23:51:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.150 23:51:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.150 23:51:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.150 23:51:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.150 23:51:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.150 23:51:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.150 23:51:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.150 23:51:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.150 23:51:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.150 23:51:05 -- accel/accel.sh@41 -- # jq -r . 00:06:35.150 [2024-04-26 23:51:05.233606] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
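accel_decomp_full_mcore, started above, combines the two previous variations, -o 0 for full-size 111250-byte transfers and -m 0xf for the four-core mask. Hand-run sketch, same assumptions:

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf    # full-size transfers across cores 0-3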
00:06:35.150 [2024-04-26 23:51:05.233704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204394 ] 00:06:35.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.150 [2024-04-26 23:51:05.300387] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.412 [2024-04-26 23:51:05.375824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.412 [2024-04-26 23:51:05.375945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.412 [2024-04-26 23:51:05.375992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.412 [2024-04-26 23:51:05.375992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val=0xf 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val=decompress 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val=software 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@22 -- # accel_module=software 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val=32 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val=32 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val=1 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val=Yes 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.412 23:51:05 -- accel/accel.sh@20 -- # val= 00:06:35.412 23:51:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.412 23:51:05 -- accel/accel.sh@19 -- # read -r var val 00:06:36.356 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.356 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.357 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.357 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.357 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.357 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.357 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.357 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.357 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.357 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.357 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.357 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.357 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.357 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.357 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.357 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.357 
23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.357 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.357 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.357 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.357 23:51:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.357 23:51:06 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.357 23:51:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.357 00:06:36.357 real 0m1.326s 00:06:36.357 user 0m4.498s 00:06:36.357 sys 0m0.118s 00:06:36.357 23:51:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.357 23:51:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.357 ************************************ 00:06:36.357 END TEST accel_decomp_full_mcore 00:06:36.357 ************************************ 00:06:36.357 23:51:06 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.357 23:51:06 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:36.357 23:51:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.357 23:51:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.619 ************************************ 00:06:36.619 START TEST accel_decomp_mthread 00:06:36.619 ************************************ 00:06:36.619 23:51:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.619 23:51:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.619 23:51:06 -- accel/accel.sh@17 -- # local accel_module 00:06:36.619 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.619 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.619 23:51:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.619 23:51:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.619 23:51:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.619 23:51:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.619 23:51:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.619 23:51:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.619 23:51:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.619 23:51:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.619 23:51:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.619 23:51:06 -- accel/accel.sh@41 -- # jq -r . 00:06:36.619 [2024-04-26 23:51:06.756771] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
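accel_decomp_mthread, started above, drops back to a single core (-c 0x1 in the EAL parameters below) but adds -T 2; its trace records val=2 where the single-threaded runs recorded val=1, consistent with two worker threads per core, though that reading is an inference from the trace rather than something the log states outright. Hand-run sketch, same assumptions:

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2    # presumed two worker threads on one core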
00:06:36.619 [2024-04-26 23:51:06.756857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204672 ] 00:06:36.619 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.619 [2024-04-26 23:51:06.823020] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.881 [2024-04-26 23:51:06.898932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val=0x1 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val=decompress 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val=software 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.881 23:51:06 -- accel/accel.sh@20 -- # val=32 00:06:36.881 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.881 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.882 23:51:06 
-- accel/accel.sh@19 -- # read -r var val 00:06:36.882 23:51:06 -- accel/accel.sh@20 -- # val=32 00:06:36.882 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.882 23:51:06 -- accel/accel.sh@20 -- # val=2 00:06:36.882 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.882 23:51:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.882 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.882 23:51:06 -- accel/accel.sh@20 -- # val=Yes 00:06:36.882 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.882 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.882 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:36.882 23:51:06 -- accel/accel.sh@20 -- # val= 00:06:36.882 23:51:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # IFS=: 00:06:36.882 23:51:06 -- accel/accel.sh@19 -- # read -r var val 00:06:37.826 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:37.826 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.826 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:37.826 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.826 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:37.826 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.826 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:37.826 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.826 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:37.826 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.826 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:37.826 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.826 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:37.826 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:37.826 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:37.826 23:51:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.826 23:51:08 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.826 23:51:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.826 00:06:37.826 real 0m1.308s 00:06:37.826 user 0m1.205s 00:06:37.826 sys 0m0.115s 00:06:37.826 23:51:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.826 23:51:08 -- common/autotest_common.sh@10 -- # set +x 
00:06:37.826 ************************************ 00:06:37.826 END TEST accel_decomp_mthread 00:06:37.826 ************************************ 00:06:38.088 23:51:08 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.088 23:51:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:38.088 23:51:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.088 23:51:08 -- common/autotest_common.sh@10 -- # set +x 00:06:38.088 ************************************ 00:06:38.088 START TEST accel_deomp_full_mthread 00:06:38.088 ************************************ 00:06:38.088 23:51:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.088 23:51:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.088 23:51:08 -- accel/accel.sh@17 -- # local accel_module 00:06:38.088 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.088 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.088 23:51:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.088 23:51:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.088 23:51:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.088 23:51:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.088 23:51:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.088 23:51:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.088 23:51:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.088 23:51:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.088 23:51:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.088 23:51:08 -- accel/accel.sh@41 -- # jq -r . 00:06:38.088 [2024-04-26 23:51:08.260527] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
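The accel_test call above reduces to the accel_perf invocation echoed at accel.sh@12. A stand-alone approximation, assuming the "-c /dev/fd/62" argument only carries the accel-module JSON built by build_accel_config (empty in this run) and can be dropped; the flag glosses below are my reading of test/accel/accel.sh, not something the log states:

  #   -t 1  run for one second          -w decompress  operation under test
  #   -l    compressed input file       -y             verify the decompressed output
  #   -o 0  use the whole file per I/O  -T 2           two worker threads ("full_mthread")
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -T 2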
00:06:38.088 [2024-04-26 23:51:08.260623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205141 ] 00:06:38.088 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.350 [2024-04-26 23:51:08.325749] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.350 [2024-04-26 23:51:08.400077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val=0x1 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val=decompress 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val=software 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val=32 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 
-- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val=32 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val=2 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val=Yes 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:38.350 23:51:08 -- accel/accel.sh@20 -- # val= 00:06:38.350 23:51:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # IFS=: 00:06:38.350 23:51:08 -- accel/accel.sh@19 -- # read -r var val 00:06:39.735 23:51:09 -- accel/accel.sh@20 -- # val= 00:06:39.735 23:51:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.735 23:51:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.736 23:51:09 -- accel/accel.sh@20 -- # val= 00:06:39.736 23:51:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.736 23:51:09 -- accel/accel.sh@20 -- # val= 00:06:39.736 23:51:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.736 23:51:09 -- accel/accel.sh@20 -- # val= 00:06:39.736 23:51:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.736 23:51:09 -- accel/accel.sh@20 -- # val= 00:06:39.736 23:51:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.736 23:51:09 -- accel/accel.sh@20 -- # val= 00:06:39.736 23:51:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.736 23:51:09 -- accel/accel.sh@20 -- # val= 00:06:39.736 23:51:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # IFS=: 00:06:39.736 23:51:09 -- accel/accel.sh@19 -- # read -r var val 00:06:39.736 23:51:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.736 23:51:09 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.736 23:51:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.736 00:06:39.736 real 0m1.334s 00:06:39.736 user 0m1.236s 00:06:39.736 sys 0m0.110s 00:06:39.736 23:51:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.736 23:51:09 -- common/autotest_common.sh@10 -- # set +x 
00:06:39.736 ************************************ 00:06:39.736 END TEST accel_deomp_full_mthread 00:06:39.736 ************************************ 00:06:39.736 23:51:09 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:39.736 23:51:09 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:39.736 23:51:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:39.736 23:51:09 -- accel/accel.sh@137 -- # build_accel_config 00:06:39.736 23:51:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.736 23:51:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.736 23:51:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.736 23:51:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.736 23:51:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.736 23:51:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.736 23:51:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.736 23:51:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:39.736 23:51:09 -- accel/accel.sh@41 -- # jq -r . 00:06:39.736 ************************************ 00:06:39.736 START TEST accel_dif_functional_tests 00:06:39.736 ************************************ 00:06:39.736 23:51:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:39.736 [2024-04-26 23:51:09.804462] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:39.736 [2024-04-26 23:51:09.804518] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205637 ] 00:06:39.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.736 [2024-04-26 23:51:09.868528] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.736 [2024-04-26 23:51:09.945831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.736 [2024-04-26 23:51:09.945983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.736 [2024-04-26 23:51:09.945986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.997 00:06:39.997 00:06:39.997 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.997 http://cunit.sourceforge.net/ 00:06:39.997 00:06:39.997 00:06:39.997 Suite: accel_dif 00:06:39.997 Test: verify: DIF generated, GUARD check ...passed 00:06:39.997 Test: verify: DIF generated, APPTAG check ...passed 00:06:39.997 Test: verify: DIF generated, REFTAG check ...passed 00:06:39.997 Test: verify: DIF not generated, GUARD check ...[2024-04-26 23:51:10.002585] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:39.997 [2024-04-26 23:51:10.002623] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:39.997 passed 00:06:39.997 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 23:51:10.002653] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:39.997 [2024-04-26 23:51:10.002668] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:39.997 passed 00:06:39.997 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 23:51:10.002683] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:39.997 [2024-04-26 
23:51:10.002698] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:39.997 passed 00:06:39.997 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:39.997 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 23:51:10.002743] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:39.997 passed 00:06:39.997 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:39.997 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:39.997 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:39.997 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 23:51:10.002862] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:39.997 passed 00:06:39.997 Test: generate copy: DIF generated, GUARD check ...passed 00:06:39.997 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:39.997 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:39.997 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:39.997 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:39.997 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:39.997 Test: generate copy: iovecs-len validate ...[2024-04-26 23:51:10.003061] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:39.997 passed 00:06:39.997 Test: generate copy: buffer alignment validate ...passed 00:06:39.997 00:06:39.997 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.997 suites 1 1 n/a 0 0 00:06:39.997 tests 20 20 20 0 0 00:06:39.997 asserts 204 204 204 0 n/a 00:06:39.997 00:06:39.997 Elapsed time = 0.002 seconds 00:06:39.997 00:06:39.997 real 0m0.368s 00:06:39.997 user 0m0.469s 00:06:39.997 sys 0m0.128s 00:06:39.997 23:51:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.997 23:51:10 -- common/autotest_common.sh@10 -- # set +x 00:06:39.997 ************************************ 00:06:39.997 END TEST accel_dif_functional_tests 00:06:39.997 ************************************ 00:06:39.997 00:06:39.997 real 0m32.642s 00:06:39.997 user 0m34.497s 00:06:39.997 sys 0m5.247s 00:06:39.997 23:51:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.997 23:51:10 -- common/autotest_common.sh@10 -- # set +x 00:06:39.997 ************************************ 00:06:39.997 END TEST accel 00:06:39.997 ************************************ 00:06:39.997 23:51:10 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:39.997 23:51:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.997 23:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.997 23:51:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.258 ************************************ 00:06:40.258 START TEST accel_rpc 00:06:40.258 ************************************ 00:06:40.258 23:51:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.258 * Looking for test storage... 
00:06:40.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:40.258 23:51:10 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.258 23:51:10 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=206024 00:06:40.258 23:51:10 -- accel/accel_rpc.sh@15 -- # waitforlisten 206024 00:06:40.258 23:51:10 -- common/autotest_common.sh@817 -- # '[' -z 206024 ']' 00:06:40.258 23:51:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.258 23:51:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:40.258 23:51:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.258 23:51:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:40.258 23:51:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.258 23:51:10 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:40.518 [2024-04-26 23:51:10.504868] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:40.518 [2024-04-26 23:51:10.504920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206024 ] 00:06:40.518 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.518 [2024-04-26 23:51:10.566895] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.519 [2024-04-26 23:51:10.635112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.089 23:51:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:41.089 23:51:11 -- common/autotest_common.sh@850 -- # return 0 00:06:41.089 23:51:11 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:41.089 23:51:11 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:41.089 23:51:11 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:41.089 23:51:11 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:41.089 23:51:11 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:41.089 23:51:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.089 23:51:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.089 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.349 ************************************ 00:06:41.349 START TEST accel_assign_opcode 00:06:41.349 ************************************ 00:06:41.349 23:51:11 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:41.349 23:51:11 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:41.349 23:51:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.349 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.349 [2024-04-26 23:51:11.381254] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:41.349 23:51:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.349 23:51:11 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:41.349 23:51:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.349 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.349 [2024-04-26 23:51:11.389268] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:41.349 23:51:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.349 23:51:11 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:41.349 23:51:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.349 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.349 23:51:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.349 23:51:11 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:41.349 23:51:11 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:41.349 23:51:11 -- accel/accel_rpc.sh@42 -- # grep software 00:06:41.349 23:51:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.349 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.349 23:51:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.610 software 00:06:41.610 00:06:41.610 real 0m0.206s 00:06:41.610 user 0m0.047s 00:06:41.610 sys 0m0.010s 00:06:41.610 23:51:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.610 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.610 ************************************ 00:06:41.610 END TEST accel_assign_opcode 00:06:41.610 ************************************ 00:06:41.610 23:51:11 -- accel/accel_rpc.sh@55 -- # killprocess 206024 00:06:41.610 23:51:11 -- common/autotest_common.sh@936 -- # '[' -z 206024 ']' 00:06:41.610 23:51:11 -- common/autotest_common.sh@940 -- # kill -0 206024 00:06:41.610 23:51:11 -- common/autotest_common.sh@941 -- # uname 00:06:41.610 23:51:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.610 23:51:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 206024 00:06:41.610 23:51:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.610 23:51:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.610 23:51:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 206024' 00:06:41.610 killing process with pid 206024 00:06:41.610 23:51:11 -- common/autotest_common.sh@955 -- # kill 206024 00:06:41.610 23:51:11 -- common/autotest_common.sh@960 -- # wait 206024 00:06:41.871 00:06:41.871 real 0m1.532s 00:06:41.871 user 0m1.634s 00:06:41.871 sys 0m0.440s 00:06:41.871 23:51:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.871 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.871 ************************************ 00:06:41.871 END TEST accel_rpc 00:06:41.871 ************************************ 00:06:41.871 23:51:11 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:41.871 23:51:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.871 23:51:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.871 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.871 ************************************ 00:06:41.871 START TEST app_cmdline 00:06:41.871 ************************************ 00:06:41.871 23:51:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.132 * Looking for test storage... 
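The accel_assign_opcode suite above boils down to a handful of RPCs against a spdk_tgt started with --wait-for-rpc. Replayed by hand with the same binaries and RPC names that appear in the trace (a sketch; the harness drives them through its rpc_cmd/waitforlisten helpers instead, and the sleep below is a crude stand-in):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
  TGT=$!
  sleep 1                                                        # stand-in for waitforlisten
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
  "$SPDK/scripts/rpc.py" framework_start_init                    # finish init so the assignment takes effect
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # expect: software
  kill $TGT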
00:06:42.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.132 23:51:12 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.132 23:51:12 -- app/cmdline.sh@17 -- # spdk_tgt_pid=206449 00:06:42.132 23:51:12 -- app/cmdline.sh@18 -- # waitforlisten 206449 00:06:42.132 23:51:12 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.132 23:51:12 -- common/autotest_common.sh@817 -- # '[' -z 206449 ']' 00:06:42.132 23:51:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.132 23:51:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.132 23:51:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.132 23:51:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.132 23:51:12 -- common/autotest_common.sh@10 -- # set +x 00:06:42.132 [2024-04-26 23:51:12.230226] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:06:42.132 [2024-04-26 23:51:12.230287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206449 ] 00:06:42.132 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.132 [2024-04-26 23:51:12.294931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.392 [2024-04-26 23:51:12.368866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.962 23:51:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:42.962 23:51:12 -- common/autotest_common.sh@850 -- # return 0 00:06:42.962 23:51:12 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:42.962 { 00:06:42.962 "version": "SPDK v24.05-pre git sha1 f1d799ad0", 00:06:42.962 "fields": { 00:06:42.962 "major": 24, 00:06:42.962 "minor": 5, 00:06:42.962 "patch": 0, 00:06:42.962 "suffix": "-pre", 00:06:42.962 "commit": "f1d799ad0" 00:06:42.962 } 00:06:42.962 } 00:06:42.962 23:51:13 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:42.962 23:51:13 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:42.962 23:51:13 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:42.962 23:51:13 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:42.962 23:51:13 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:42.962 23:51:13 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:42.962 23:51:13 -- app/cmdline.sh@26 -- # sort 00:06:42.962 23:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:42.962 23:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:42.962 23:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.962 23:51:13 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:42.962 23:51:13 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:42.962 23:51:13 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.962 23:51:13 -- common/autotest_common.sh@638 -- # local es=0 00:06:42.962 23:51:13 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.962 23:51:13 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.962 23:51:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:42.962 23:51:13 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.962 23:51:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:42.962 23:51:13 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.962 23:51:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:42.962 23:51:13 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.962 23:51:13 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:42.962 23:51:13 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.223 request: 00:06:43.223 { 00:06:43.223 "method": "env_dpdk_get_mem_stats", 00:06:43.223 "req_id": 1 00:06:43.223 } 00:06:43.223 Got JSON-RPC error response 00:06:43.223 response: 00:06:43.223 { 00:06:43.223 "code": -32601, 00:06:43.223 "message": "Method not found" 00:06:43.223 } 00:06:43.223 23:51:13 -- common/autotest_common.sh@641 -- # es=1 00:06:43.223 23:51:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:43.223 23:51:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:43.223 23:51:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:43.223 23:51:13 -- app/cmdline.sh@1 -- # killprocess 206449 00:06:43.223 23:51:13 -- common/autotest_common.sh@936 -- # '[' -z 206449 ']' 00:06:43.223 23:51:13 -- common/autotest_common.sh@940 -- # kill -0 206449 00:06:43.223 23:51:13 -- common/autotest_common.sh@941 -- # uname 00:06:43.223 23:51:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.223 23:51:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 206449 00:06:43.223 23:51:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.223 23:51:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.223 23:51:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 206449' 00:06:43.223 killing process with pid 206449 00:06:43.223 23:51:13 -- common/autotest_common.sh@955 -- # kill 206449 00:06:43.223 23:51:13 -- common/autotest_common.sh@960 -- # wait 206449 00:06:43.484 00:06:43.484 real 0m1.496s 00:06:43.484 user 0m1.768s 00:06:43.484 sys 0m0.381s 00:06:43.484 23:51:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.484 23:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.484 ************************************ 00:06:43.484 END TEST app_cmdline 00:06:43.484 ************************************ 00:06:43.485 23:51:13 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.485 23:51:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.485 23:51:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.485 23:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.746 ************************************ 00:06:43.746 START TEST version 00:06:43.746 
************************************ 00:06:43.746 23:51:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.746 * Looking for test storage... 00:06:43.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:43.746 23:51:13 -- app/version.sh@17 -- # get_header_version major 00:06:43.746 23:51:13 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.746 23:51:13 -- app/version.sh@14 -- # cut -f2 00:06:43.746 23:51:13 -- app/version.sh@14 -- # tr -d '"' 00:06:43.746 23:51:13 -- app/version.sh@17 -- # major=24 00:06:43.746 23:51:13 -- app/version.sh@18 -- # get_header_version minor 00:06:43.746 23:51:13 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.746 23:51:13 -- app/version.sh@14 -- # cut -f2 00:06:43.746 23:51:13 -- app/version.sh@14 -- # tr -d '"' 00:06:43.746 23:51:13 -- app/version.sh@18 -- # minor=5 00:06:43.746 23:51:13 -- app/version.sh@19 -- # get_header_version patch 00:06:43.746 23:51:13 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.746 23:51:13 -- app/version.sh@14 -- # cut -f2 00:06:43.746 23:51:13 -- app/version.sh@14 -- # tr -d '"' 00:06:43.746 23:51:13 -- app/version.sh@19 -- # patch=0 00:06:43.746 23:51:13 -- app/version.sh@20 -- # get_header_version suffix 00:06:43.746 23:51:13 -- app/version.sh@14 -- # tr -d '"' 00:06:43.746 23:51:13 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:43.746 23:51:13 -- app/version.sh@14 -- # cut -f2 00:06:43.746 23:51:13 -- app/version.sh@20 -- # suffix=-pre 00:06:43.746 23:51:13 -- app/version.sh@22 -- # version=24.5 00:06:43.746 23:51:13 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:43.746 23:51:13 -- app/version.sh@28 -- # version=24.5rc0 00:06:43.746 23:51:13 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:43.747 23:51:13 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:43.747 23:51:13 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:43.747 23:51:13 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:43.747 00:06:43.747 real 0m0.149s 00:06:43.747 user 0m0.067s 00:06:43.747 sys 0m0.114s 00:06:43.747 23:51:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.747 23:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.747 ************************************ 00:06:43.747 END TEST version 00:06:43.747 ************************************ 00:06:43.747 23:51:13 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:43.747 23:51:13 -- spdk/autotest.sh@194 -- # uname -s 00:06:43.747 23:51:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:43.747 23:51:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:43.747 23:51:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:43.747 23:51:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:43.747 23:51:13 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:43.747 23:51:13 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:43.747 23:51:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:43.747 23:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.747 23:51:13 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:43.747 23:51:13 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:43.747 23:51:13 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:43.747 23:51:13 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:43.747 23:51:13 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:43.747 23:51:13 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:43.747 23:51:13 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:43.747 23:51:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:43.747 23:51:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.747 23:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:44.009 ************************************ 00:06:44.009 START TEST nvmf_tcp 00:06:44.009 ************************************ 00:06:44.009 23:51:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.009 * Looking for test storage... 00:06:44.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:44.009 23:51:14 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:44.009 23:51:14 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:44.009 23:51:14 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.009 23:51:14 -- nvmf/common.sh@7 -- # uname -s 00:06:44.009 23:51:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.009 23:51:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.009 23:51:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.009 23:51:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.009 23:51:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.009 23:51:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.009 23:51:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.009 23:51:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.009 23:51:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.009 23:51:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.009 23:51:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:44.009 23:51:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:44.009 23:51:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.009 23:51:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.009 23:51:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.009 23:51:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.009 23:51:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.009 23:51:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.009 23:51:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.009 23:51:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.009 23:51:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.009 23:51:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.009 23:51:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.009 23:51:14 -- paths/export.sh@5 -- # export PATH 00:06:44.009 23:51:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.009 23:51:14 -- nvmf/common.sh@47 -- # : 0 00:06:44.009 23:51:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.009 23:51:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.009 23:51:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.009 23:51:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.009 23:51:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.009 23:51:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.009 23:51:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.009 23:51:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.009 23:51:14 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:44.009 23:51:14 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:44.009 23:51:14 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:44.009 23:51:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:44.009 23:51:14 -- common/autotest_common.sh@10 -- # set +x 00:06:44.009 23:51:14 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:44.009 23:51:14 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.009 23:51:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:44.009 23:51:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.009 23:51:14 -- common/autotest_common.sh@10 -- # set +x 00:06:44.271 ************************************ 00:06:44.271 START TEST nvmf_example 00:06:44.271 ************************************ 00:06:44.271 23:51:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.271 * Looking for test storage... 
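Both nvmf.sh and nvmf_example.sh source nvmf/common.sh, which is where the host identity reused later by nvme connect comes from. A sketch of the pattern visible in the trace; deriving the hostid by stripping the uuid: prefix is an assumption that merely matches the values printed above:

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed derivation -> 00539ede-7deb-ec11-9bc7-a4bf01928396
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")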
00:06:44.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.271 23:51:14 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.271 23:51:14 -- nvmf/common.sh@7 -- # uname -s 00:06:44.532 23:51:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.532 23:51:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.532 23:51:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.532 23:51:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.532 23:51:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.532 23:51:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.532 23:51:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.532 23:51:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.532 23:51:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.532 23:51:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.532 23:51:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:44.532 23:51:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:44.532 23:51:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.532 23:51:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.532 23:51:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.532 23:51:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.532 23:51:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.532 23:51:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.532 23:51:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.532 23:51:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.532 23:51:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.532 23:51:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.532 23:51:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.532 23:51:14 -- paths/export.sh@5 -- # export PATH 00:06:44.532 23:51:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.532 23:51:14 -- nvmf/common.sh@47 -- # : 0 00:06:44.532 23:51:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.532 23:51:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.532 23:51:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.532 23:51:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.532 23:51:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.532 23:51:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.532 23:51:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.532 23:51:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.532 23:51:14 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:44.532 23:51:14 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:44.532 23:51:14 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:44.533 23:51:14 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:44.533 23:51:14 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:44.533 23:51:14 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:44.533 23:51:14 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:44.533 23:51:14 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:44.533 23:51:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:44.533 23:51:14 -- common/autotest_common.sh@10 -- # set +x 00:06:44.533 23:51:14 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:44.533 23:51:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:44.533 23:51:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.533 23:51:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:44.533 23:51:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:44.533 23:51:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:44.533 23:51:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.533 23:51:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.533 23:51:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.533 23:51:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:44.533 23:51:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:44.533 23:51:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:44.533 23:51:14 -- 
common/autotest_common.sh@10 -- # set +x 00:06:52.680 23:51:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:52.680 23:51:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:52.680 23:51:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:52.680 23:51:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:52.680 23:51:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:52.680 23:51:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:52.680 23:51:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:52.680 23:51:21 -- nvmf/common.sh@295 -- # net_devs=() 00:06:52.680 23:51:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:52.680 23:51:21 -- nvmf/common.sh@296 -- # e810=() 00:06:52.680 23:51:21 -- nvmf/common.sh@296 -- # local -ga e810 00:06:52.680 23:51:21 -- nvmf/common.sh@297 -- # x722=() 00:06:52.680 23:51:21 -- nvmf/common.sh@297 -- # local -ga x722 00:06:52.680 23:51:21 -- nvmf/common.sh@298 -- # mlx=() 00:06:52.680 23:51:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:52.680 23:51:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.680 23:51:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:52.680 23:51:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:52.680 23:51:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:52.680 23:51:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.680 23:51:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:52.680 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:52.680 23:51:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.680 23:51:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:52.680 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:52.680 23:51:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
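The device discovery above matches PCI IDs against the e810 table and resolves netdev names through sysfs. Condensed to a few commands, with the device ID taken from the "Found 0000:31:00.x (0x8086 - 0x159b)" lines; using lspci for the lookup is an assumption, the harness walks its own pci_bus_cache instead:

  lspci -d 8086:159b                          # the two E810 ports the test will use
  ls /sys/bus/pci/devices/0000:31:00.0/net    # -> cvl_0_0 (target side)
  ls /sys/bus/pci/devices/0000:31:00.1/net    # -> cvl_0_1 (initiator side)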
00:06:52.680 23:51:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:52.680 23:51:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.680 23:51:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.680 23:51:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:52.680 23:51:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.680 23:51:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:52.680 Found net devices under 0000:31:00.0: cvl_0_0 00:06:52.680 23:51:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.680 23:51:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.680 23:51:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.680 23:51:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:52.680 23:51:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.680 23:51:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:52.680 Found net devices under 0000:31:00.1: cvl_0_1 00:06:52.680 23:51:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.680 23:51:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:52.680 23:51:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:52.680 23:51:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:52.680 23:51:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.680 23:51:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.680 23:51:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.680 23:51:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:52.680 23:51:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.680 23:51:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.680 23:51:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:52.680 23:51:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.680 23:51:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.680 23:51:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:52.680 23:51:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:52.680 23:51:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.680 23:51:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.680 23:51:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.680 23:51:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.680 23:51:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:52.680 23:51:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.680 23:51:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.680 23:51:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.680 23:51:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:52.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:52.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:06:52.680 00:06:52.680 --- 10.0.0.2 ping statistics --- 00:06:52.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.680 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:06:52.680 23:51:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:52.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:06:52.680 00:06:52.680 --- 10.0.0.1 ping statistics --- 00:06:52.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.680 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:06:52.680 23:51:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.680 23:51:21 -- nvmf/common.sh@411 -- # return 0 00:06:52.680 23:51:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:52.680 23:51:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.680 23:51:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:52.680 23:51:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.680 23:51:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:52.680 23:51:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:52.680 23:51:21 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:52.680 23:51:21 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:52.680 23:51:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:52.680 23:51:21 -- common/autotest_common.sh@10 -- # set +x 00:06:52.680 23:51:21 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:52.680 23:51:21 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:52.680 23:51:21 -- target/nvmf_example.sh@34 -- # nvmfpid=210713 00:06:52.680 23:51:21 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:52.680 23:51:21 -- target/nvmf_example.sh@36 -- # waitforlisten 210713 00:06:52.680 23:51:21 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:52.680 23:51:21 -- common/autotest_common.sh@817 -- # '[' -z 210713 ']' 00:06:52.680 23:51:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.680 23:51:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:52.680 23:51:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
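nvmf_tcp_init above splits the two ports into a target network namespace and the root (initiator) namespace, then starts the nvmf example app inside that namespace. Reduced to its core commands, with interface names and addresses exactly as they appear in the trace (a sketch; the real helper also flushes addresses and checks return codes):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # the reachability checks shown above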
00:06:52.680 23:51:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:52.680 23:51:21 -- common/autotest_common.sh@10 -- # set +x 00:06:52.680 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.680 23:51:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:52.680 23:51:22 -- common/autotest_common.sh@850 -- # return 0 00:06:52.680 23:51:22 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:52.680 23:51:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:52.680 23:51:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.680 23:51:22 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:52.680 23:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.680 23:51:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.680 23:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.680 23:51:22 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:52.680 23:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.680 23:51:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.680 23:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.680 23:51:22 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:52.680 23:51:22 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:52.680 23:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.680 23:51:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.680 23:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.680 23:51:22 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:52.680 23:51:22 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:52.680 23:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.680 23:51:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.680 23:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.680 23:51:22 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.680 23:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.680 23:51:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.680 23:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.680 23:51:22 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:52.680 23:51:22 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:52.680 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.918 Initializing NVMe Controllers 00:07:04.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:04.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:04.918 Initialization complete. Launching workers. 
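The nvmf_example test above is driven over JSON-RPC: the example target (build/examples/nvmf) is started inside the namespace, then a TCP transport, a 64 MB malloc bdev (512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420 are created, and spdk_nvme_perf exercises it from the root namespace. A sketch of the same sequence issued by hand with scripts/rpc.py (an assumption; the harness goes through its rpc_cmd wrapper instead), with flags copied from the run above and paths relative to the SPDK checkout:

    # start the example target inside the namespace, same arguments as the harness
    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &

    # crude stand-in for waitforlisten: wait for the RPC socket to appear
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                  # creates Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # exercise it from the root namespace, the same perf invocation as logged above
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'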
00:07:04.918 ======================================================== 00:07:04.918 Latency(us) 00:07:04.918 Device Information : IOPS MiB/s Average min max 00:07:04.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18031.21 70.43 3549.53 623.73 15304.66 00:07:04.918 ======================================================== 00:07:04.918 Total : 18031.21 70.43 3549.53 623.73 15304.66 00:07:04.918 00:07:04.918 23:51:33 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:04.918 23:51:33 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:04.918 23:51:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:04.918 23:51:33 -- nvmf/common.sh@117 -- # sync 00:07:04.918 23:51:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:04.918 23:51:33 -- nvmf/common.sh@120 -- # set +e 00:07:04.918 23:51:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:04.918 23:51:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:04.918 rmmod nvme_tcp 00:07:04.918 rmmod nvme_fabrics 00:07:04.918 rmmod nvme_keyring 00:07:04.918 23:51:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:04.918 23:51:33 -- nvmf/common.sh@124 -- # set -e 00:07:04.918 23:51:33 -- nvmf/common.sh@125 -- # return 0 00:07:04.918 23:51:33 -- nvmf/common.sh@478 -- # '[' -n 210713 ']' 00:07:04.918 23:51:33 -- nvmf/common.sh@479 -- # killprocess 210713 00:07:04.918 23:51:33 -- common/autotest_common.sh@936 -- # '[' -z 210713 ']' 00:07:04.918 23:51:33 -- common/autotest_common.sh@940 -- # kill -0 210713 00:07:04.918 23:51:33 -- common/autotest_common.sh@941 -- # uname 00:07:04.918 23:51:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.918 23:51:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 210713 00:07:04.918 23:51:33 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:04.918 23:51:33 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:04.918 23:51:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 210713' 00:07:04.918 killing process with pid 210713 00:07:04.918 23:51:33 -- common/autotest_common.sh@955 -- # kill 210713 00:07:04.918 23:51:33 -- common/autotest_common.sh@960 -- # wait 210713 00:07:04.918 nvmf threads initialize successfully 00:07:04.918 bdev subsystem init successfully 00:07:04.918 created a nvmf target service 00:07:04.918 create targets's poll groups done 00:07:04.918 all subsystems of target started 00:07:04.918 nvmf target is running 00:07:04.918 all subsystems of target stopped 00:07:04.918 destroy targets's poll groups done 00:07:04.918 destroyed the nvmf target service 00:07:04.918 bdev subsystem finish successfully 00:07:04.918 nvmf threads destroy successfully 00:07:04.918 23:51:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:04.918 23:51:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:04.918 23:51:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:04.918 23:51:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:04.918 23:51:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:04.918 23:51:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.918 23:51:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.918 23:51:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.179 23:51:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.179 23:51:35 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:05.179 23:51:35 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:07:05.179 23:51:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.440 00:07:05.440 real 0m21.025s 00:07:05.440 user 0m46.317s 00:07:05.440 sys 0m6.518s 00:07:05.440 23:51:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.440 23:51:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.440 ************************************ 00:07:05.440 END TEST nvmf_example 00:07:05.440 ************************************ 00:07:05.440 23:51:35 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:05.440 23:51:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.440 23:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.440 23:51:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.440 ************************************ 00:07:05.440 START TEST nvmf_filesystem 00:07:05.440 ************************************ 00:07:05.440 23:51:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:05.704 * Looking for test storage... 00:07:05.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.704 23:51:35 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:05.704 23:51:35 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:05.704 23:51:35 -- common/autotest_common.sh@34 -- # set -e 00:07:05.704 23:51:35 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:05.704 23:51:35 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:05.704 23:51:35 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:05.704 23:51:35 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:05.704 23:51:35 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:05.704 23:51:35 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:05.704 23:51:35 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:05.704 23:51:35 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:05.704 23:51:35 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:05.704 23:51:35 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:05.704 23:51:35 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:05.704 23:51:35 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:05.704 23:51:35 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:05.704 23:51:35 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:05.704 23:51:35 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:05.704 23:51:35 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:05.704 23:51:35 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:05.704 23:51:35 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:05.704 23:51:35 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:05.704 23:51:35 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:05.704 23:51:35 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:05.704 23:51:35 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:05.704 23:51:35 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:05.704 23:51:35 -- common/build_config.sh@19 -- # 
CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:05.704 23:51:35 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:05.704 23:51:35 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:05.704 23:51:35 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:05.704 23:51:35 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:05.704 23:51:35 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:05.704 23:51:35 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:05.704 23:51:35 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:05.704 23:51:35 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:05.704 23:51:35 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:05.704 23:51:35 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:05.704 23:51:35 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:05.704 23:51:35 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:05.704 23:51:35 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:05.704 23:51:35 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:05.704 23:51:35 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:05.704 23:51:35 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:05.704 23:51:35 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:05.704 23:51:35 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:05.704 23:51:35 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:05.704 23:51:35 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:05.704 23:51:35 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:05.704 23:51:35 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:05.704 23:51:35 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:05.704 23:51:35 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:05.704 23:51:35 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:05.704 23:51:35 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:05.704 23:51:35 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:05.704 23:51:35 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:05.704 23:51:35 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:05.704 23:51:35 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:05.704 23:51:35 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:05.704 23:51:35 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:05.704 23:51:35 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:05.704 23:51:35 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:05.704 23:51:35 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:05.704 23:51:35 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:05.704 23:51:35 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:05.704 23:51:35 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:05.704 23:51:35 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:05.704 23:51:35 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:05.704 23:51:35 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:05.704 23:51:35 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:05.704 23:51:35 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:05.704 23:51:35 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:05.704 23:51:35 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:05.704 23:51:35 -- common/build_config.sh@65 
-- # CONFIG_SHARED=y 00:07:05.704 23:51:35 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:05.704 23:51:35 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:05.704 23:51:35 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:05.704 23:51:35 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:05.704 23:51:35 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:05.704 23:51:35 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:05.704 23:51:35 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:05.704 23:51:35 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:05.704 23:51:35 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:05.704 23:51:35 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:05.704 23:51:35 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:05.704 23:51:35 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:05.704 23:51:35 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:05.704 23:51:35 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:05.704 23:51:35 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:05.704 23:51:35 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:05.704 23:51:35 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:05.704 23:51:35 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:05.704 23:51:35 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:05.704 23:51:35 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:05.704 23:51:35 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:05.704 23:51:35 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.704 23:51:35 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.704 23:51:35 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.704 23:51:35 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.704 23:51:35 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:05.704 23:51:35 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:05.704 23:51:35 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:05.704 23:51:35 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:05.704 23:51:35 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:05.704 23:51:35 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:05.704 23:51:35 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:05.704 23:51:35 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:05.704 #define SPDK_CONFIG_H 00:07:05.704 #define SPDK_CONFIG_APPS 1 00:07:05.704 #define SPDK_CONFIG_ARCH native 00:07:05.704 #undef SPDK_CONFIG_ASAN 00:07:05.704 #undef SPDK_CONFIG_AVAHI 00:07:05.704 #undef SPDK_CONFIG_CET 00:07:05.704 #define SPDK_CONFIG_COVERAGE 1 00:07:05.704 #define SPDK_CONFIG_CROSS_PREFIX 00:07:05.704 #undef SPDK_CONFIG_CRYPTO 00:07:05.704 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:05.704 #undef SPDK_CONFIG_CUSTOMOCF 00:07:05.704 
#undef SPDK_CONFIG_DAOS 00:07:05.704 #define SPDK_CONFIG_DAOS_DIR 00:07:05.704 #define SPDK_CONFIG_DEBUG 1 00:07:05.704 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:05.704 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:05.704 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:05.704 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:05.704 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:05.704 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:05.704 #define SPDK_CONFIG_EXAMPLES 1 00:07:05.704 #undef SPDK_CONFIG_FC 00:07:05.704 #define SPDK_CONFIG_FC_PATH 00:07:05.704 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:05.704 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:05.704 #undef SPDK_CONFIG_FUSE 00:07:05.704 #undef SPDK_CONFIG_FUZZER 00:07:05.704 #define SPDK_CONFIG_FUZZER_LIB 00:07:05.704 #undef SPDK_CONFIG_GOLANG 00:07:05.704 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:05.704 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:05.704 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:05.704 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:05.704 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:05.704 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:05.704 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:05.704 #define SPDK_CONFIG_IDXD 1 00:07:05.704 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:05.704 #undef SPDK_CONFIG_IPSEC_MB 00:07:05.704 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:05.704 #define SPDK_CONFIG_ISAL 1 00:07:05.704 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:05.704 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:05.704 #define SPDK_CONFIG_LIBDIR 00:07:05.704 #undef SPDK_CONFIG_LTO 00:07:05.704 #define SPDK_CONFIG_MAX_LCORES 00:07:05.704 #define SPDK_CONFIG_NVME_CUSE 1 00:07:05.704 #undef SPDK_CONFIG_OCF 00:07:05.704 #define SPDK_CONFIG_OCF_PATH 00:07:05.704 #define SPDK_CONFIG_OPENSSL_PATH 00:07:05.704 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:05.704 #define SPDK_CONFIG_PGO_DIR 00:07:05.704 #undef SPDK_CONFIG_PGO_USE 00:07:05.704 #define SPDK_CONFIG_PREFIX /usr/local 00:07:05.704 #undef SPDK_CONFIG_RAID5F 00:07:05.704 #undef SPDK_CONFIG_RBD 00:07:05.704 #define SPDK_CONFIG_RDMA 1 00:07:05.704 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:05.704 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:05.704 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:05.704 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:05.704 #define SPDK_CONFIG_SHARED 1 00:07:05.704 #undef SPDK_CONFIG_SMA 00:07:05.704 #define SPDK_CONFIG_TESTS 1 00:07:05.704 #undef SPDK_CONFIG_TSAN 00:07:05.704 #define SPDK_CONFIG_UBLK 1 00:07:05.704 #define SPDK_CONFIG_UBSAN 1 00:07:05.704 #undef SPDK_CONFIG_UNIT_TESTS 00:07:05.704 #undef SPDK_CONFIG_URING 00:07:05.704 #define SPDK_CONFIG_URING_PATH 00:07:05.704 #undef SPDK_CONFIG_URING_ZNS 00:07:05.704 #undef SPDK_CONFIG_USDT 00:07:05.704 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:05.704 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:05.704 #define SPDK_CONFIG_VFIO_USER 1 00:07:05.704 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:05.704 #define SPDK_CONFIG_VHOST 1 00:07:05.704 #define SPDK_CONFIG_VIRTIO 1 00:07:05.704 #undef SPDK_CONFIG_VTUNE 00:07:05.704 #define SPDK_CONFIG_VTUNE_DIR 00:07:05.704 #define SPDK_CONFIG_WERROR 1 00:07:05.704 #define SPDK_CONFIG_WPDK_DIR 00:07:05.704 #undef SPDK_CONFIG_XNVME 00:07:05.704 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:05.704 23:51:35 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:05.704 23:51:35 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.704 23:51:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.704 23:51:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.704 23:51:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.704 23:51:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.704 23:51:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.705 23:51:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.705 23:51:35 -- paths/export.sh@5 -- # export PATH 00:07:05.705 23:51:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.705 23:51:35 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:05.705 23:51:35 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:05.705 23:51:35 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:05.705 23:51:35 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:05.705 23:51:35 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:05.705 23:51:35 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.705 23:51:35 -- pm/common@67 -- # TEST_TAG=N/A 00:07:05.705 23:51:35 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:05.705 23:51:35 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:05.705 23:51:35 -- pm/common@71 -- # uname -s 00:07:05.705 23:51:35 -- pm/common@71 -- # PM_OS=Linux 00:07:05.705 23:51:35 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:05.705 23:51:35 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:05.705 23:51:35 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:05.705 23:51:35 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:07:05.705 23:51:35 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:05.705 23:51:35 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:05.705 23:51:35 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:05.705 23:51:35 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:05.705 23:51:35 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:05.705 23:51:35 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:05.705 23:51:35 -- common/autotest_common.sh@57 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:05.705 23:51:35 -- common/autotest_common.sh@61 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:05.705 23:51:35 -- common/autotest_common.sh@63 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:05.705 23:51:35 -- common/autotest_common.sh@65 -- # : 1 00:07:05.705 23:51:35 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:05.705 23:51:35 -- common/autotest_common.sh@67 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:05.705 23:51:35 -- common/autotest_common.sh@69 -- # : 00:07:05.705 23:51:35 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:05.705 23:51:35 -- common/autotest_common.sh@71 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:05.705 23:51:35 -- common/autotest_common.sh@73 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:05.705 23:51:35 -- common/autotest_common.sh@75 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:05.705 23:51:35 -- common/autotest_common.sh@77 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:05.705 23:51:35 -- common/autotest_common.sh@79 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:05.705 23:51:35 -- common/autotest_common.sh@81 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:05.705 23:51:35 -- common/autotest_common.sh@83 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:05.705 23:51:35 -- common/autotest_common.sh@85 -- # : 1 00:07:05.705 23:51:35 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:05.705 23:51:35 -- common/autotest_common.sh@87 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:05.705 23:51:35 -- common/autotest_common.sh@89 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:05.705 23:51:35 -- common/autotest_common.sh@91 -- # : 1 
00:07:05.705 23:51:35 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:05.705 23:51:35 -- common/autotest_common.sh@93 -- # : 1 00:07:05.705 23:51:35 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:05.705 23:51:35 -- common/autotest_common.sh@95 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:05.705 23:51:35 -- common/autotest_common.sh@97 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:05.705 23:51:35 -- common/autotest_common.sh@99 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:05.705 23:51:35 -- common/autotest_common.sh@101 -- # : tcp 00:07:05.705 23:51:35 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:05.705 23:51:35 -- common/autotest_common.sh@103 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:05.705 23:51:35 -- common/autotest_common.sh@105 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:05.705 23:51:35 -- common/autotest_common.sh@107 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:05.705 23:51:35 -- common/autotest_common.sh@109 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:05.705 23:51:35 -- common/autotest_common.sh@111 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:05.705 23:51:35 -- common/autotest_common.sh@113 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:05.705 23:51:35 -- common/autotest_common.sh@115 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:05.705 23:51:35 -- common/autotest_common.sh@117 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:05.705 23:51:35 -- common/autotest_common.sh@119 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:05.705 23:51:35 -- common/autotest_common.sh@121 -- # : 1 00:07:05.705 23:51:35 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:05.705 23:51:35 -- common/autotest_common.sh@123 -- # : 00:07:05.705 23:51:35 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:05.705 23:51:35 -- common/autotest_common.sh@125 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:05.705 23:51:35 -- common/autotest_common.sh@127 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:05.705 23:51:35 -- common/autotest_common.sh@129 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:05.705 23:51:35 -- common/autotest_common.sh@131 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:05.705 23:51:35 -- common/autotest_common.sh@133 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:05.705 23:51:35 -- common/autotest_common.sh@135 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:05.705 23:51:35 -- common/autotest_common.sh@137 -- # : 00:07:05.705 23:51:35 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:05.705 23:51:35 -- 
common/autotest_common.sh@139 -- # : true 00:07:05.705 23:51:35 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:05.705 23:51:35 -- common/autotest_common.sh@141 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:05.705 23:51:35 -- common/autotest_common.sh@143 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:05.705 23:51:35 -- common/autotest_common.sh@145 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:05.705 23:51:35 -- common/autotest_common.sh@147 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:05.705 23:51:35 -- common/autotest_common.sh@149 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:05.705 23:51:35 -- common/autotest_common.sh@151 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:05.705 23:51:35 -- common/autotest_common.sh@153 -- # : e810 00:07:05.705 23:51:35 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:05.705 23:51:35 -- common/autotest_common.sh@155 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:05.705 23:51:35 -- common/autotest_common.sh@157 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:05.705 23:51:35 -- common/autotest_common.sh@159 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:05.705 23:51:35 -- common/autotest_common.sh@161 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:05.705 23:51:35 -- common/autotest_common.sh@163 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:05.705 23:51:35 -- common/autotest_common.sh@166 -- # : 00:07:05.705 23:51:35 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:05.705 23:51:35 -- common/autotest_common.sh@168 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:05.705 23:51:35 -- common/autotest_common.sh@170 -- # : 0 00:07:05.705 23:51:35 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:05.705 23:51:35 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:05.705 23:51:35 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:05.705 23:51:35 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:05.705 23:51:35 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:05.705 23:51:35 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.705 23:51:35 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.705 23:51:35 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.705 23:51:35 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.705 23:51:35 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:05.705 23:51:35 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:05.705 23:51:35 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.705 23:51:35 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.705 23:51:35 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:05.705 23:51:35 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:05.705 23:51:35 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:05.705 23:51:35 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:05.705 23:51:35 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:05.705 23:51:35 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:05.705 23:51:35 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:05.705 23:51:35 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:05.705 23:51:35 -- common/autotest_common.sh@199 -- # cat 00:07:05.705 23:51:35 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:05.705 23:51:35 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:05.705 23:51:35 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:05.705 23:51:35 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:05.705 23:51:35 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:05.705 23:51:35 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:05.705 23:51:35 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:05.706 23:51:35 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.706 23:51:35 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.706 23:51:35 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.706 23:51:35 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.706 23:51:35 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:05.706 23:51:35 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:05.706 23:51:35 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:05.706 23:51:35 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:05.706 23:51:35 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:05.706 23:51:35 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:05.706 23:51:35 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:05.706 23:51:35 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:05.706 23:51:35 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:05.706 23:51:35 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:05.706 23:51:35 -- common/autotest_common.sh@252 -- # valgrind= 00:07:05.706 23:51:35 -- common/autotest_common.sh@258 -- # uname -s 00:07:05.706 23:51:35 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:07:05.706 23:51:35 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:05.706 23:51:35 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:05.706 23:51:35 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:05.706 23:51:35 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:05.706 
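Scattered through the exports above are the sanitizer and hugepage knobs the harness sets before running a test script. Collected in one place, with values copied verbatim from this run, they are handy when re-running a single test by hand:

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file   # the suppression the harness writes here
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
    export HUGEMEM=4096 CLEAR_HUGE=yes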
23:51:35 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:05.706 23:51:35 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j144 00:07:05.706 23:51:35 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:05.706 23:51:35 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:05.706 23:51:35 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:05.706 23:51:35 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:05.706 23:51:35 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:05.706 23:51:35 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:05.706 23:51:35 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:05.706 23:51:35 -- common/autotest_common.sh@307 -- # [[ -z 213722 ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@307 -- # kill -0 213722 00:07:05.706 23:51:35 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:05.706 23:51:35 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:05.706 23:51:35 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:05.706 23:51:35 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:05.706 23:51:35 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:05.706 23:51:35 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:05.706 23:51:35 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:05.706 23:51:35 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.4Z7sw4 00:07:05.706 23:51:35 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:05.706 23:51:35 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4Z7sw4/tests/target /tmp/spdk.4Z7sw4 00:07:05.706 23:51:35 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:05.706 23:51:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:05.706 23:51:35 -- common/autotest_common.sh@316 -- # df -T 00:07:05.706 23:51:35 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:05.706 23:51:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:05.706 23:51:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:05.706 23:51:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:07:05.706 23:51:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=121721819136 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=129371000832 00:07:05.706 23:51:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=7649181696 00:07:05.706 23:51:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=64680787968 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685498368 00:07:05.706 23:51:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=4710400 00:07:05.706 23:51:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=25864454144 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=25874202624 00:07:05.706 23:51:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=9748480 00:07:05.706 23:51:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=189440 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:07:05.706 23:51:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=314368 00:07:05.706 23:51:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=64684863488 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685502464 00:07:05.706 23:51:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=638976 00:07:05.706 23:51:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=12937093120 00:07:05.706 23:51:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12937097216 00:07:05.706 23:51:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:05.706 23:51:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:05.706 23:51:35 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:05.706 * Looking for test storage... 
00:07:05.706 23:51:35 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:05.706 23:51:35 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:05.706 23:51:35 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.706 23:51:35 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:05.706 23:51:35 -- common/autotest_common.sh@361 -- # mount=/ 00:07:05.706 23:51:35 -- common/autotest_common.sh@363 -- # target_space=121721819136 00:07:05.706 23:51:35 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:05.706 23:51:35 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:05.706 23:51:35 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@370 -- # new_size=9863774208 00:07:05.706 23:51:35 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:05.706 23:51:35 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.706 23:51:35 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.706 23:51:35 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.706 23:51:35 -- common/autotest_common.sh@378 -- # return 0 00:07:05.706 23:51:35 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:05.706 23:51:35 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:05.706 23:51:35 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:05.706 23:51:35 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:05.706 23:51:35 -- common/autotest_common.sh@1673 -- # true 00:07:05.706 23:51:35 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:05.706 23:51:35 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:05.706 23:51:35 -- common/autotest_common.sh@27 -- # exec 00:07:05.706 23:51:35 -- common/autotest_common.sh@29 -- # exec 00:07:05.706 23:51:35 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:05.706 23:51:35 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:05.706 23:51:35 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:05.706 23:51:35 -- common/autotest_common.sh@18 -- # set -x 00:07:05.706 23:51:35 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.706 23:51:35 -- nvmf/common.sh@7 -- # uname -s 00:07:05.706 23:51:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.706 23:51:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.706 23:51:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.706 23:51:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.706 23:51:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.706 23:51:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.706 23:51:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.706 23:51:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.706 23:51:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.706 23:51:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.706 23:51:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:05.706 23:51:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:05.706 23:51:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.706 23:51:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.706 23:51:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.706 23:51:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.706 23:51:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.706 23:51:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.706 23:51:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.706 23:51:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.706 23:51:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.706 23:51:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.706 23:51:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.706 23:51:35 -- paths/export.sh@5 -- # export PATH 00:07:05.706 23:51:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.706 23:51:35 -- nvmf/common.sh@47 -- # : 0 00:07:05.706 23:51:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.706 23:51:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.706 23:51:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.706 23:51:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.706 23:51:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.707 23:51:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.707 23:51:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.707 23:51:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.707 23:51:35 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:05.707 23:51:35 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:05.707 23:51:35 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:05.707 23:51:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:05.707 23:51:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.707 23:51:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:05.707 23:51:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:05.707 23:51:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:05.707 23:51:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.707 23:51:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.707 23:51:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.968 23:51:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:05.968 23:51:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:05.968 23:51:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.968 23:51:35 -- common/autotest_common.sh@10 -- # set +x 00:07:14.151 23:51:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:14.151 23:51:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.151 23:51:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.151 23:51:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.151 23:51:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.151 23:51:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.151 23:51:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.151 23:51:42 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:14.151 23:51:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.151 23:51:42 -- nvmf/common.sh@296 -- # e810=() 00:07:14.151 23:51:42 -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.151 23:51:42 -- nvmf/common.sh@297 -- # x722=() 00:07:14.151 23:51:42 -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.151 23:51:42 -- nvmf/common.sh@298 -- # mlx=() 00:07:14.151 23:51:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.151 23:51:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.151 23:51:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.151 23:51:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:14.151 23:51:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.151 23:51:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.151 23:51:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:14.151 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:14.151 23:51:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.151 23:51:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:14.151 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:14.151 23:51:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.151 23:51:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.151 23:51:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.151 23:51:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:14.151 23:51:42 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.151 23:51:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:14.151 Found net devices under 0000:31:00.0: cvl_0_0 00:07:14.151 23:51:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.151 23:51:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.151 23:51:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.151 23:51:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:14.151 23:51:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.151 23:51:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:14.151 Found net devices under 0000:31:00.1: cvl_0_1 00:07:14.151 23:51:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.151 23:51:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:14.151 23:51:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:14.151 23:51:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:14.151 23:51:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:14.151 23:51:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.151 23:51:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.151 23:51:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.151 23:51:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:14.151 23:51:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.151 23:51:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.151 23:51:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:14.151 23:51:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.151 23:51:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.151 23:51:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:14.151 23:51:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:14.151 23:51:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.151 23:51:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.151 23:51:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.151 23:51:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.151 23:51:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:14.151 23:51:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.151 23:51:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.151 23:51:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.151 23:51:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:14.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:07:14.151 00:07:14.151 --- 10.0.0.2 ping statistics --- 00:07:14.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.151 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:07:14.152 23:51:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:07:14.152 00:07:14.152 --- 10.0.0.1 ping statistics --- 00:07:14.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.152 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:07:14.152 23:51:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.152 23:51:43 -- nvmf/common.sh@411 -- # return 0 00:07:14.152 23:51:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:14.152 23:51:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.152 23:51:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:14.152 23:51:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:14.152 23:51:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.152 23:51:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:14.152 23:51:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:14.152 23:51:43 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:14.152 23:51:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:14.152 23:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.152 23:51:43 -- common/autotest_common.sh@10 -- # set +x 00:07:14.152 ************************************ 00:07:14.152 START TEST nvmf_filesystem_no_in_capsule 00:07:14.152 ************************************ 00:07:14.152 23:51:43 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:14.152 23:51:43 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:14.152 23:51:43 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:14.152 23:51:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:14.152 23:51:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:14.152 23:51:43 -- common/autotest_common.sh@10 -- # set +x 00:07:14.152 23:51:43 -- nvmf/common.sh@470 -- # nvmfpid=217469 00:07:14.152 23:51:43 -- nvmf/common.sh@471 -- # waitforlisten 217469 00:07:14.152 23:51:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:14.152 23:51:43 -- common/autotest_common.sh@817 -- # '[' -z 217469 ']' 00:07:14.152 23:51:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.152 23:51:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:14.152 23:51:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.152 23:51:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:14.152 23:51:43 -- common/autotest_common.sh@10 -- # set +x 00:07:14.152 [2024-04-26 23:51:43.495294] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:07:14.152 [2024-04-26 23:51:43.495350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.152 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.152 [2024-04-26 23:51:43.565950] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.152 [2024-04-26 23:51:43.642482] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
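The network bring-up traced above can be reproduced by hand. Condensed from the trace (binary path shortened), one of the two E810 ports is moved into its own network namespace so the NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 really crosses the link, and the target application is then launched inside that namespace:

    # sketch of the nvmf_tcp_init + nvmfappstart steps shown above
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target side
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
    modprobe nvme-tcp                           # kernel initiator driver
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &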
00:07:14.152 [2024-04-26 23:51:43.642523] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.152 [2024-04-26 23:51:43.642531] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.152 [2024-04-26 23:51:43.642537] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.152 [2024-04-26 23:51:43.642543] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.152 [2024-04-26 23:51:43.642649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.152 [2024-04-26 23:51:43.642783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.152 [2024-04-26 23:51:43.642941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.152 [2024-04-26 23:51:43.642941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.152 23:51:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.152 23:51:44 -- common/autotest_common.sh@850 -- # return 0 00:07:14.152 23:51:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:14.152 23:51:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:14.152 23:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.152 23:51:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.152 23:51:44 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:14.152 23:51:44 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:14.152 23:51:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.152 23:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.152 [2024-04-26 23:51:44.316497] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.152 23:51:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.152 23:51:44 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:14.152 23:51:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.152 23:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.414 Malloc1 00:07:14.414 23:51:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.414 23:51:44 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.414 23:51:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.414 23:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.414 23:51:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.414 23:51:44 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.414 23:51:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.414 23:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.414 23:51:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.414 23:51:44 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.414 23:51:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.414 23:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.414 [2024-04-26 23:51:44.447306] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.414 23:51:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.414 23:51:44 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:07:14.414 23:51:44 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:14.414 23:51:44 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:14.414 23:51:44 -- common/autotest_common.sh@1366 -- # local bs 00:07:14.414 23:51:44 -- common/autotest_common.sh@1367 -- # local nb 00:07:14.414 23:51:44 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:14.414 23:51:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.414 23:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.414 23:51:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.414 23:51:44 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:14.414 { 00:07:14.414 "name": "Malloc1", 00:07:14.414 "aliases": [ 00:07:14.414 "b5cd8648-e08f-4b5e-b3e3-463cc8edb88c" 00:07:14.414 ], 00:07:14.414 "product_name": "Malloc disk", 00:07:14.414 "block_size": 512, 00:07:14.414 "num_blocks": 1048576, 00:07:14.414 "uuid": "b5cd8648-e08f-4b5e-b3e3-463cc8edb88c", 00:07:14.414 "assigned_rate_limits": { 00:07:14.414 "rw_ios_per_sec": 0, 00:07:14.414 "rw_mbytes_per_sec": 0, 00:07:14.414 "r_mbytes_per_sec": 0, 00:07:14.414 "w_mbytes_per_sec": 0 00:07:14.414 }, 00:07:14.414 "claimed": true, 00:07:14.414 "claim_type": "exclusive_write", 00:07:14.414 "zoned": false, 00:07:14.414 "supported_io_types": { 00:07:14.414 "read": true, 00:07:14.414 "write": true, 00:07:14.414 "unmap": true, 00:07:14.414 "write_zeroes": true, 00:07:14.414 "flush": true, 00:07:14.414 "reset": true, 00:07:14.414 "compare": false, 00:07:14.414 "compare_and_write": false, 00:07:14.414 "abort": true, 00:07:14.414 "nvme_admin": false, 00:07:14.414 "nvme_io": false 00:07:14.414 }, 00:07:14.414 "memory_domains": [ 00:07:14.414 { 00:07:14.414 "dma_device_id": "system", 00:07:14.414 "dma_device_type": 1 00:07:14.414 }, 00:07:14.414 { 00:07:14.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.414 "dma_device_type": 2 00:07:14.414 } 00:07:14.414 ], 00:07:14.414 "driver_specific": {} 00:07:14.414 } 00:07:14.414 ]' 00:07:14.414 23:51:44 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:14.414 23:51:44 -- common/autotest_common.sh@1369 -- # bs=512 00:07:14.414 23:51:44 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:14.414 23:51:44 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:14.414 23:51:44 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:14.414 23:51:44 -- common/autotest_common.sh@1374 -- # echo 512 00:07:14.414 23:51:44 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:14.414 23:51:44 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:16.326 23:51:46 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:16.326 23:51:46 -- common/autotest_common.sh@1184 -- # local i=0 00:07:16.326 23:51:46 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:16.326 23:51:46 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:16.326 23:51:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:18.284 23:51:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:18.284 23:51:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:18.284 23:51:48 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.284 23:51:48 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
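The rpc_cmd calls traced above are scripts/rpc.py invocations against the nvmf_tgt that was just started. Roughly (paths shortened; hostnqn/hostid come from nvme gen-hostnqn earlier in the trace), the target provisioning and the host-side connect boil down to:

    # target side: TCP transport with no in-capsule data, a 512 MiB / 512 B-block
    # malloc bdev, and one subsystem exposing it on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # same geometry check that get_bdev_size performs
    ./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size, .[] .num_blocks'

    # initiator side: connect over TCP, then wait for the block device to appear,
    # matched by the serial number configured on the subsystem
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME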
00:07:18.284 23:51:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.284 23:51:48 -- common/autotest_common.sh@1194 -- # return 0 00:07:18.284 23:51:48 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:18.284 23:51:48 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:18.284 23:51:48 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:18.284 23:51:48 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:18.284 23:51:48 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:18.284 23:51:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:18.284 23:51:48 -- setup/common.sh@80 -- # echo 536870912 00:07:18.284 23:51:48 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:18.284 23:51:48 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:18.284 23:51:48 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:18.284 23:51:48 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:18.284 23:51:48 -- target/filesystem.sh@69 -- # partprobe 00:07:18.855 23:51:48 -- target/filesystem.sh@70 -- # sleep 1 00:07:19.797 23:51:49 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:19.797 23:51:49 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:19.797 23:51:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:19.797 23:51:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.797 23:51:49 -- common/autotest_common.sh@10 -- # set +x 00:07:20.058 ************************************ 00:07:20.058 START TEST filesystem_ext4 00:07:20.058 ************************************ 00:07:20.058 23:51:50 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:20.058 23:51:50 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:20.058 23:51:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.058 23:51:50 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:20.058 23:51:50 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:20.058 23:51:50 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:20.058 23:51:50 -- common/autotest_common.sh@914 -- # local i=0 00:07:20.058 23:51:50 -- common/autotest_common.sh@915 -- # local force 00:07:20.058 23:51:50 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:20.058 23:51:50 -- common/autotest_common.sh@918 -- # force=-F 00:07:20.058 23:51:50 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:20.058 mke2fs 1.46.5 (30-Dec-2021) 00:07:20.058 Discarding device blocks: 0/522240 done 00:07:20.058 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:20.058 Filesystem UUID: f35c5ef2-9983-4b9a-91fb-546352a800ee 00:07:20.058 Superblock backups stored on blocks: 00:07:20.058 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:20.058 00:07:20.058 Allocating group tables: 0/64 done 00:07:20.058 Writing inode tables: 0/64 done 00:07:23.360 Creating journal (8192 blocks): done 00:07:23.882 Writing superblocks and filesystem accounting information: 0/64 done 00:07:23.882 00:07:23.882 23:51:53 -- common/autotest_common.sh@931 -- # return 0 00:07:23.882 23:51:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:24.150 23:51:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:24.150 23:51:54 -- target/filesystem.sh@25 -- # sync 00:07:24.150 23:51:54 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:07:24.150 23:51:54 -- target/filesystem.sh@27 -- # sync 00:07:24.150 23:51:54 -- target/filesystem.sh@29 -- # i=0 00:07:24.150 23:51:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:24.150 23:51:54 -- target/filesystem.sh@37 -- # kill -0 217469 00:07:24.150 23:51:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:24.150 23:51:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:24.150 23:51:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:24.150 23:51:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:24.150 00:07:24.150 real 0m4.174s 00:07:24.150 user 0m0.028s 00:07:24.150 sys 0m0.074s 00:07:24.150 23:51:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.150 23:51:54 -- common/autotest_common.sh@10 -- # set +x 00:07:24.150 ************************************ 00:07:24.150 END TEST filesystem_ext4 00:07:24.150 ************************************ 00:07:24.150 23:51:54 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:24.150 23:51:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:24.150 23:51:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.150 23:51:54 -- common/autotest_common.sh@10 -- # set +x 00:07:24.414 ************************************ 00:07:24.414 START TEST filesystem_btrfs 00:07:24.414 ************************************ 00:07:24.414 23:51:54 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:24.414 23:51:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:24.414 23:51:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.414 23:51:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:24.414 23:51:54 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:24.414 23:51:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:24.414 23:51:54 -- common/autotest_common.sh@914 -- # local i=0 00:07:24.414 23:51:54 -- common/autotest_common.sh@915 -- # local force 00:07:24.414 23:51:54 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:24.414 23:51:54 -- common/autotest_common.sh@920 -- # force=-f 00:07:24.414 23:51:54 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:24.676 btrfs-progs v6.6.2 00:07:24.676 See https://btrfs.readthedocs.io for more information. 00:07:24.676 00:07:24.676 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:24.676 NOTE: several default settings have changed in version 5.15, please make sure 00:07:24.676 this does not affect your deployments: 00:07:24.676 - DUP for metadata (-m dup) 00:07:24.676 - enabled no-holes (-O no-holes) 00:07:24.676 - enabled free-space-tree (-R free-space-tree) 00:07:24.676 00:07:24.676 Label: (null) 00:07:24.676 UUID: 47ee02ef-bd00-4944-b55f-2eebe94c4855 00:07:24.676 Node size: 16384 00:07:24.676 Sector size: 4096 00:07:24.676 Filesystem size: 510.00MiB 00:07:24.676 Block group profiles: 00:07:24.676 Data: single 8.00MiB 00:07:24.676 Metadata: DUP 32.00MiB 00:07:24.676 System: DUP 8.00MiB 00:07:24.676 SSD detected: yes 00:07:24.676 Zoned device: no 00:07:24.676 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:24.676 Runtime features: free-space-tree 00:07:24.676 Checksum: crc32c 00:07:24.676 Number of devices: 1 00:07:24.676 Devices: 00:07:24.676 ID SIZE PATH 00:07:24.676 1 510.00MiB /dev/nvme0n1p1 00:07:24.676 00:07:24.676 23:51:54 -- common/autotest_common.sh@931 -- # return 0 00:07:24.676 23:51:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.633 23:51:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.633 23:51:55 -- target/filesystem.sh@25 -- # sync 00:07:25.633 23:51:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.633 23:51:55 -- target/filesystem.sh@27 -- # sync 00:07:25.633 23:51:55 -- target/filesystem.sh@29 -- # i=0 00:07:25.633 23:51:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.633 23:51:55 -- target/filesystem.sh@37 -- # kill -0 217469 00:07:25.633 23:51:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.633 23:51:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.633 23:51:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.633 23:51:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.633 00:07:25.633 real 0m1.204s 00:07:25.633 user 0m0.033s 00:07:25.633 sys 0m0.125s 00:07:25.633 23:51:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.633 23:51:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.633 ************************************ 00:07:25.633 END TEST filesystem_btrfs 00:07:25.633 ************************************ 00:07:25.633 23:51:55 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:25.633 23:51:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:25.633 23:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.633 23:51:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.895 ************************************ 00:07:25.895 START TEST filesystem_xfs 00:07:25.895 ************************************ 00:07:25.895 23:51:55 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:25.895 23:51:55 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:25.895 23:51:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.895 23:51:55 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:25.895 23:51:55 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:25.895 23:51:55 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:25.895 23:51:55 -- common/autotest_common.sh@914 -- # local i=0 00:07:25.895 23:51:55 -- common/autotest_common.sh@915 -- # local force 00:07:25.895 23:51:55 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:25.895 23:51:55 -- common/autotest_common.sh@920 -- # force=-f 00:07:25.895 23:51:55 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:25.895 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:25.895 = sectsz=512 attr=2, projid32bit=1 00:07:25.895 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:25.895 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:25.895 data = bsize=4096 blocks=130560, imaxpct=25 00:07:25.895 = sunit=0 swidth=0 blks 00:07:25.895 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:25.895 log =internal log bsize=4096 blocks=16384, version=2 00:07:25.895 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:25.895 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:26.838 Discarding blocks...Done. 00:07:26.838 23:51:56 -- common/autotest_common.sh@931 -- # return 0 00:07:26.838 23:51:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.749 23:51:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.749 23:51:58 -- target/filesystem.sh@25 -- # sync 00:07:28.749 23:51:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.749 23:51:58 -- target/filesystem.sh@27 -- # sync 00:07:28.749 23:51:58 -- target/filesystem.sh@29 -- # i=0 00:07:28.749 23:51:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.749 23:51:58 -- target/filesystem.sh@37 -- # kill -0 217469 00:07:28.749 23:51:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.749 23:51:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.749 23:51:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.749 23:51:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.749 00:07:28.749 real 0m3.007s 00:07:28.749 user 0m0.032s 00:07:28.749 sys 0m0.071s 00:07:28.749 23:51:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.749 23:51:58 -- common/autotest_common.sh@10 -- # set +x 00:07:28.749 ************************************ 00:07:28.749 END TEST filesystem_xfs 00:07:28.749 ************************************ 00:07:28.749 23:51:58 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:29.319 23:51:59 -- target/filesystem.sh@93 -- # sync 00:07:29.319 23:51:59 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.319 23:51:59 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.319 23:51:59 -- common/autotest_common.sh@1205 -- # local i=0 00:07:29.319 23:51:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:29.319 23:51:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.319 23:51:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:29.319 23:51:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.319 23:51:59 -- common/autotest_common.sh@1217 -- # return 0 00:07:29.319 23:51:59 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.319 23:51:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.319 23:51:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.319 23:51:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.319 23:51:59 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:29.319 23:51:59 -- target/filesystem.sh@101 -- # killprocess 217469 00:07:29.319 23:51:59 -- common/autotest_common.sh@936 -- # '[' -z 217469 ']' 00:07:29.319 23:51:59 -- common/autotest_common.sh@940 -- # kill -0 217469 00:07:29.319 23:51:59 -- 
common/autotest_common.sh@941 -- # uname 00:07:29.319 23:51:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:29.319 23:51:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 217469 00:07:29.319 23:51:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:29.320 23:51:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:29.320 23:51:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 217469' 00:07:29.320 killing process with pid 217469 00:07:29.320 23:51:59 -- common/autotest_common.sh@955 -- # kill 217469 00:07:29.320 23:51:59 -- common/autotest_common.sh@960 -- # wait 217469 00:07:29.581 23:51:59 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:29.582 00:07:29.582 real 0m16.201s 00:07:29.582 user 1m4.053s 00:07:29.582 sys 0m1.431s 00:07:29.582 23:51:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:29.582 23:51:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.582 ************************************ 00:07:29.582 END TEST nvmf_filesystem_no_in_capsule 00:07:29.582 ************************************ 00:07:29.582 23:51:59 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:29.582 23:51:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:29.582 23:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.582 23:51:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.843 ************************************ 00:07:29.843 START TEST nvmf_filesystem_in_capsule 00:07:29.843 ************************************ 00:07:29.843 23:51:59 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:29.843 23:51:59 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:29.843 23:51:59 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:29.843 23:51:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:29.843 23:51:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:29.843 23:51:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.843 23:51:59 -- nvmf/common.sh@470 -- # nvmfpid=221067 00:07:29.843 23:51:59 -- nvmf/common.sh@471 -- # waitforlisten 221067 00:07:29.843 23:51:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:29.843 23:51:59 -- common/autotest_common.sh@817 -- # '[' -z 221067 ']' 00:07:29.843 23:51:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.843 23:51:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:29.843 23:51:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.843 23:51:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:29.843 23:51:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.843 [2024-04-26 23:51:59.865698] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
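For reference, each of the filesystem_* subtests that just ran (ext4, btrfs, xfs) performs the same cycle against the exported namespace; condensed into one loop it is roughly:

    # /dev/nvme0n1 is the connected SPDK namespace: give it a single GPT partition,
    # then exercise each filesystem with a trivial create/sync/delete round trip
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    for fstype in ext4 btrfs xfs; do
        case "$fstype" in
            ext4)  mkfs.ext4  -F /dev/nvme0n1p1 ;;
            btrfs) mkfs.btrfs -f /dev/nvme0n1p1 ;;
            xfs)   mkfs.xfs   -f /dev/nvme0n1p1 ;;
        esac
        mount /dev/nvme0n1p1 /mnt/device
        touch /mnt/device/aaa
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
        kill -0 "$nvmfpid"      # target must still be alive after the I/O
    done

The nvmf_filesystem_in_capsule pass starting here repeats exactly this cycle; only the transport configuration differs (see the note further down).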
00:07:29.843 [2024-04-26 23:51:59.865743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.843 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.843 [2024-04-26 23:51:59.934778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.843 [2024-04-26 23:51:59.998815] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.843 [2024-04-26 23:51:59.998856] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.843 [2024-04-26 23:51:59.998864] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.843 [2024-04-26 23:51:59.998874] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.843 [2024-04-26 23:51:59.998879] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.843 [2024-04-26 23:51:59.999007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.843 [2024-04-26 23:51:59.999398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.843 [2024-04-26 23:51:59.999555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.843 [2024-04-26 23:51:59.999555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.414 23:52:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:30.414 23:52:00 -- common/autotest_common.sh@850 -- # return 0 00:07:30.414 23:52:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:30.414 23:52:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:30.414 23:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.676 23:52:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.676 23:52:00 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:30.676 23:52:00 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:30.676 23:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:30.676 23:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.676 [2024-04-26 23:52:00.668404] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.676 23:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:30.676 23:52:00 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:30.676 23:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:30.676 23:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.676 Malloc1 00:07:30.676 23:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:30.676 23:52:00 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:30.676 23:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:30.676 23:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.676 23:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:30.676 23:52:00 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:30.676 23:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:30.676 23:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.676 23:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:30.676 23:52:00 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.676 23:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:30.676 23:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.676 [2024-04-26 23:52:00.799904] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.676 23:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:30.676 23:52:00 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:30.676 23:52:00 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:30.676 23:52:00 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:30.676 23:52:00 -- common/autotest_common.sh@1366 -- # local bs 00:07:30.676 23:52:00 -- common/autotest_common.sh@1367 -- # local nb 00:07:30.676 23:52:00 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:30.676 23:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:30.676 23:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:30.676 23:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:30.676 23:52:00 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:30.676 { 00:07:30.676 "name": "Malloc1", 00:07:30.676 "aliases": [ 00:07:30.676 "0bedb473-5a01-46f3-9709-06b43458501d" 00:07:30.676 ], 00:07:30.676 "product_name": "Malloc disk", 00:07:30.676 "block_size": 512, 00:07:30.676 "num_blocks": 1048576, 00:07:30.676 "uuid": "0bedb473-5a01-46f3-9709-06b43458501d", 00:07:30.676 "assigned_rate_limits": { 00:07:30.676 "rw_ios_per_sec": 0, 00:07:30.676 "rw_mbytes_per_sec": 0, 00:07:30.676 "r_mbytes_per_sec": 0, 00:07:30.676 "w_mbytes_per_sec": 0 00:07:30.676 }, 00:07:30.676 "claimed": true, 00:07:30.676 "claim_type": "exclusive_write", 00:07:30.676 "zoned": false, 00:07:30.676 "supported_io_types": { 00:07:30.676 "read": true, 00:07:30.676 "write": true, 00:07:30.676 "unmap": true, 00:07:30.676 "write_zeroes": true, 00:07:30.676 "flush": true, 00:07:30.676 "reset": true, 00:07:30.676 "compare": false, 00:07:30.676 "compare_and_write": false, 00:07:30.676 "abort": true, 00:07:30.676 "nvme_admin": false, 00:07:30.676 "nvme_io": false 00:07:30.676 }, 00:07:30.676 "memory_domains": [ 00:07:30.676 { 00:07:30.676 "dma_device_id": "system", 00:07:30.676 "dma_device_type": 1 00:07:30.676 }, 00:07:30.676 { 00:07:30.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.676 "dma_device_type": 2 00:07:30.676 } 00:07:30.676 ], 00:07:30.676 "driver_specific": {} 00:07:30.676 } 00:07:30.676 ]' 00:07:30.676 23:52:00 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:30.676 23:52:00 -- common/autotest_common.sh@1369 -- # bs=512 00:07:30.676 23:52:00 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:30.937 23:52:00 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:30.937 23:52:00 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:30.937 23:52:00 -- common/autotest_common.sh@1374 -- # echo 512 00:07:30.937 23:52:00 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:30.937 23:52:00 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.319 23:52:02 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.319 23:52:02 -- common/autotest_common.sh@1184 -- # local i=0 00:07:32.319 23:52:02 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.319 23:52:02 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:32.319 23:52:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:34.863 23:52:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:34.863 23:52:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:34.863 23:52:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.863 23:52:04 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:34.863 23:52:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.863 23:52:04 -- common/autotest_common.sh@1194 -- # return 0 00:07:34.863 23:52:04 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:34.863 23:52:04 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:34.863 23:52:04 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:34.863 23:52:04 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:34.863 23:52:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:34.863 23:52:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:34.863 23:52:04 -- setup/common.sh@80 -- # echo 536870912 00:07:34.863 23:52:04 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:34.863 23:52:04 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:34.863 23:52:04 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:34.864 23:52:04 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:34.864 23:52:04 -- target/filesystem.sh@69 -- # partprobe 00:07:34.864 23:52:04 -- target/filesystem.sh@70 -- # sleep 1 00:07:35.808 23:52:05 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:35.808 23:52:05 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:35.808 23:52:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:35.808 23:52:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.808 23:52:05 -- common/autotest_common.sh@10 -- # set +x 00:07:36.068 ************************************ 00:07:36.068 START TEST filesystem_in_capsule_ext4 00:07:36.068 ************************************ 00:07:36.068 23:52:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:36.068 23:52:06 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:36.068 23:52:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.068 23:52:06 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:36.068 23:52:06 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:36.068 23:52:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:36.068 23:52:06 -- common/autotest_common.sh@914 -- # local i=0 00:07:36.068 23:52:06 -- common/autotest_common.sh@915 -- # local force 00:07:36.068 23:52:06 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:36.068 23:52:06 -- common/autotest_common.sh@918 -- # force=-F 00:07:36.068 23:52:06 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:36.068 mke2fs 1.46.5 (30-Dec-2021) 00:07:36.068 Discarding device blocks: 0/522240 done 00:07:36.068 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:36.068 Filesystem UUID: 54c7ce80-bd08-4a5e-8bff-bb63c98d0fd1 00:07:36.068 Superblock backups stored on blocks: 00:07:36.068 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:36.068 00:07:36.068 
Allocating group tables: 0/64 done 00:07:36.068 Writing inode tables: 0/64 done 00:07:36.068 Creating journal (8192 blocks): done 00:07:36.588 Writing superblocks and filesystem accounting information: 0/64 done 00:07:36.588 00:07:36.588 23:52:06 -- common/autotest_common.sh@931 -- # return 0 00:07:36.588 23:52:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.528 23:52:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.528 23:52:07 -- target/filesystem.sh@25 -- # sync 00:07:37.528 23:52:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.528 23:52:07 -- target/filesystem.sh@27 -- # sync 00:07:37.528 23:52:07 -- target/filesystem.sh@29 -- # i=0 00:07:37.528 23:52:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.528 23:52:07 -- target/filesystem.sh@37 -- # kill -0 221067 00:07:37.528 23:52:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.528 23:52:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.528 23:52:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.528 23:52:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.528 00:07:37.528 real 0m1.540s 00:07:37.528 user 0m0.025s 00:07:37.528 sys 0m0.071s 00:07:37.528 23:52:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.528 23:52:07 -- common/autotest_common.sh@10 -- # set +x 00:07:37.528 ************************************ 00:07:37.528 END TEST filesystem_in_capsule_ext4 00:07:37.528 ************************************ 00:07:37.528 23:52:07 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.528 23:52:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:37.528 23:52:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.528 23:52:07 -- common/autotest_common.sh@10 -- # set +x 00:07:37.789 ************************************ 00:07:37.789 START TEST filesystem_in_capsule_btrfs 00:07:37.789 ************************************ 00:07:37.789 23:52:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.789 23:52:07 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.789 23:52:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.789 23:52:07 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.789 23:52:07 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:37.789 23:52:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:37.789 23:52:07 -- common/autotest_common.sh@914 -- # local i=0 00:07:37.789 23:52:07 -- common/autotest_common.sh@915 -- # local force 00:07:37.789 23:52:07 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:37.789 23:52:07 -- common/autotest_common.sh@920 -- # force=-f 00:07:37.789 23:52:07 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:38.050 btrfs-progs v6.6.2 00:07:38.050 See https://btrfs.readthedocs.io for more information. 00:07:38.050 00:07:38.050 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:38.050 NOTE: several default settings have changed in version 5.15, please make sure 00:07:38.050 this does not affect your deployments: 00:07:38.050 - DUP for metadata (-m dup) 00:07:38.050 - enabled no-holes (-O no-holes) 00:07:38.050 - enabled free-space-tree (-R free-space-tree) 00:07:38.050 00:07:38.050 Label: (null) 00:07:38.050 UUID: e8d0896d-4949-4efb-8d6f-8033f032736b 00:07:38.050 Node size: 16384 00:07:38.050 Sector size: 4096 00:07:38.050 Filesystem size: 510.00MiB 00:07:38.050 Block group profiles: 00:07:38.050 Data: single 8.00MiB 00:07:38.050 Metadata: DUP 32.00MiB 00:07:38.050 System: DUP 8.00MiB 00:07:38.050 SSD detected: yes 00:07:38.050 Zoned device: no 00:07:38.050 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:38.050 Runtime features: free-space-tree 00:07:38.050 Checksum: crc32c 00:07:38.050 Number of devices: 1 00:07:38.050 Devices: 00:07:38.050 ID SIZE PATH 00:07:38.050 1 510.00MiB /dev/nvme0n1p1 00:07:38.050 00:07:38.050 23:52:08 -- common/autotest_common.sh@931 -- # return 0 00:07:38.050 23:52:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.993 23:52:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.993 23:52:09 -- target/filesystem.sh@25 -- # sync 00:07:38.993 23:52:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.993 23:52:09 -- target/filesystem.sh@27 -- # sync 00:07:38.993 23:52:09 -- target/filesystem.sh@29 -- # i=0 00:07:38.993 23:52:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.993 23:52:09 -- target/filesystem.sh@37 -- # kill -0 221067 00:07:38.993 23:52:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.993 23:52:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.993 23:52:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.993 23:52:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.993 00:07:38.993 real 0m1.291s 00:07:38.993 user 0m0.030s 00:07:38.993 sys 0m0.132s 00:07:38.993 23:52:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.993 23:52:09 -- common/autotest_common.sh@10 -- # set +x 00:07:38.993 ************************************ 00:07:38.993 END TEST filesystem_in_capsule_btrfs 00:07:38.993 ************************************ 00:07:38.993 23:52:09 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:38.993 23:52:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:38.993 23:52:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.993 23:52:09 -- common/autotest_common.sh@10 -- # set +x 00:07:39.255 ************************************ 00:07:39.255 START TEST filesystem_in_capsule_xfs 00:07:39.255 ************************************ 00:07:39.255 23:52:09 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:39.255 23:52:09 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:39.255 23:52:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.255 23:52:09 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:39.255 23:52:09 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:39.255 23:52:09 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:39.255 23:52:09 -- common/autotest_common.sh@914 -- # local i=0 00:07:39.255 23:52:09 -- common/autotest_common.sh@915 -- # local force 00:07:39.255 23:52:09 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:39.255 23:52:09 -- common/autotest_common.sh@920 -- # force=-f 
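The only functional difference in this second pass is how the transport was created at the start of the TEST: in-capsule data is enabled, so small writes can travel inside the NVMe/TCP command capsule instead of a separate data transfer. Roughly:

    # first pass:  no in-capsule data allowed
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # second pass: up to 4096 bytes of write data ride in the command capsule
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096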
00:07:39.255 23:52:09 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:39.255 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:39.255 = sectsz=512 attr=2, projid32bit=1 00:07:39.255 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:39.255 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:39.255 data = bsize=4096 blocks=130560, imaxpct=25 00:07:39.255 = sunit=0 swidth=0 blks 00:07:39.255 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:39.255 log =internal log bsize=4096 blocks=16384, version=2 00:07:39.255 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:39.255 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:40.197 Discarding blocks...Done. 00:07:40.197 23:52:10 -- common/autotest_common.sh@931 -- # return 0 00:07:40.197 23:52:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.110 23:52:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.110 23:52:11 -- target/filesystem.sh@25 -- # sync 00:07:42.110 23:52:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.110 23:52:11 -- target/filesystem.sh@27 -- # sync 00:07:42.110 23:52:11 -- target/filesystem.sh@29 -- # i=0 00:07:42.110 23:52:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.110 23:52:12 -- target/filesystem.sh@37 -- # kill -0 221067 00:07:42.110 23:52:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.110 23:52:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.110 23:52:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.110 23:52:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.110 00:07:42.110 real 0m2.771s 00:07:42.110 user 0m0.022s 00:07:42.110 sys 0m0.080s 00:07:42.110 23:52:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.110 23:52:12 -- common/autotest_common.sh@10 -- # set +x 00:07:42.110 ************************************ 00:07:42.110 END TEST filesystem_in_capsule_xfs 00:07:42.110 ************************************ 00:07:42.110 23:52:12 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.110 23:52:12 -- target/filesystem.sh@93 -- # sync 00:07:42.110 23:52:12 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.370 23:52:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.370 23:52:12 -- common/autotest_common.sh@1205 -- # local i=0 00:07:42.370 23:52:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:42.370 23:52:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.370 23:52:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:42.370 23:52:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.370 23:52:12 -- common/autotest_common.sh@1217 -- # return 0 00:07:42.370 23:52:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.370 23:52:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.370 23:52:12 -- common/autotest_common.sh@10 -- # set +x 00:07:42.370 23:52:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.370 23:52:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:42.370 23:52:12 -- target/filesystem.sh@101 -- # killprocess 221067 00:07:42.370 23:52:12 -- common/autotest_common.sh@936 -- # '[' -z 221067 ']' 00:07:42.370 23:52:12 -- common/autotest_common.sh@940 -- # kill -0 221067 
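The teardown running at this point (and completing just below) removes the test partition, detaches the initiator, deletes the subsystem over RPC and stops the target; the final namespace cleanup is hidden behind xtrace_disable in the trace, so the last lines here are the plain commands it roughly amounts to:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1      # drop the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"
    # nvmftestfini: unload the initiator modules and clear the test network
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk                     # assumed equivalent of _remove_spdk_ns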
00:07:42.370 23:52:12 -- common/autotest_common.sh@941 -- # uname 00:07:42.370 23:52:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:42.370 23:52:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 221067 00:07:42.370 23:52:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:42.370 23:52:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:42.370 23:52:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 221067' 00:07:42.370 killing process with pid 221067 00:07:42.370 23:52:12 -- common/autotest_common.sh@955 -- # kill 221067 00:07:42.370 23:52:12 -- common/autotest_common.sh@960 -- # wait 221067 00:07:42.631 23:52:12 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.631 00:07:42.631 real 0m12.898s 00:07:42.631 user 0m50.965s 00:07:42.631 sys 0m1.356s 00:07:42.631 23:52:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.631 23:52:12 -- common/autotest_common.sh@10 -- # set +x 00:07:42.631 ************************************ 00:07:42.631 END TEST nvmf_filesystem_in_capsule 00:07:42.631 ************************************ 00:07:42.631 23:52:12 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:42.631 23:52:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:42.631 23:52:12 -- nvmf/common.sh@117 -- # sync 00:07:42.631 23:52:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.631 23:52:12 -- nvmf/common.sh@120 -- # set +e 00:07:42.631 23:52:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.631 23:52:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.631 rmmod nvme_tcp 00:07:42.631 rmmod nvme_fabrics 00:07:42.631 rmmod nvme_keyring 00:07:42.631 23:52:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.631 23:52:12 -- nvmf/common.sh@124 -- # set -e 00:07:42.631 23:52:12 -- nvmf/common.sh@125 -- # return 0 00:07:42.631 23:52:12 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:42.631 23:52:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:42.631 23:52:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:42.631 23:52:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:42.631 23:52:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.631 23:52:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.631 23:52:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.631 23:52:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.631 23:52:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.174 23:52:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.174 00:07:45.174 real 0m39.277s 00:07:45.174 user 1m57.392s 00:07:45.174 sys 0m8.461s 00:07:45.174 23:52:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.174 23:52:14 -- common/autotest_common.sh@10 -- # set +x 00:07:45.174 ************************************ 00:07:45.174 END TEST nvmf_filesystem 00:07:45.174 ************************************ 00:07:45.174 23:52:14 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.174 23:52:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:45.174 23:52:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.174 23:52:14 -- common/autotest_common.sh@10 -- # set +x 00:07:45.174 ************************************ 00:07:45.174 START TEST nvmf_discovery 00:07:45.174 ************************************ 00:07:45.174 23:52:15 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.174 * Looking for test storage... 00:07:45.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.174 23:52:15 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.174 23:52:15 -- nvmf/common.sh@7 -- # uname -s 00:07:45.174 23:52:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.174 23:52:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.175 23:52:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.175 23:52:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.175 23:52:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.175 23:52:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.175 23:52:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.175 23:52:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.175 23:52:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.175 23:52:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.175 23:52:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.175 23:52:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.175 23:52:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.175 23:52:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.175 23:52:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.175 23:52:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.175 23:52:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.175 23:52:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.175 23:52:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.175 23:52:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.175 23:52:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.175 23:52:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.175 23:52:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.175 23:52:15 -- paths/export.sh@5 -- # export PATH 00:07:45.175 23:52:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.175 23:52:15 -- nvmf/common.sh@47 -- # : 0 00:07:45.175 23:52:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.175 23:52:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.175 23:52:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.175 23:52:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.175 23:52:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.175 23:52:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.175 23:52:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.175 23:52:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.175 23:52:15 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:45.175 23:52:15 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:45.175 23:52:15 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:45.175 23:52:15 -- target/discovery.sh@15 -- # hash nvme 00:07:45.175 23:52:15 -- target/discovery.sh@20 -- # nvmftestinit 00:07:45.175 23:52:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:45.175 23:52:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.175 23:52:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:45.175 23:52:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:45.175 23:52:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:45.175 23:52:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.175 23:52:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.175 23:52:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.175 23:52:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:45.175 23:52:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:45.175 23:52:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.175 23:52:15 -- common/autotest_common.sh@10 -- # set +x 00:07:51.850 23:52:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:51.850 23:52:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.850 23:52:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.850 23:52:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.850 23:52:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.850 23:52:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.850 23:52:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.850 23:52:22 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:51.850 23:52:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.850 23:52:22 -- nvmf/common.sh@296 -- # e810=() 00:07:51.850 23:52:22 -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.850 23:52:22 -- nvmf/common.sh@297 -- # x722=() 00:07:51.850 23:52:22 -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.850 23:52:22 -- nvmf/common.sh@298 -- # mlx=() 00:07:51.850 23:52:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.850 23:52:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.850 23:52:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.850 23:52:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.850 23:52:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.850 23:52:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.850 23:52:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.850 23:52:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.850 23:52:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.850 23:52:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.851 23:52:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.851 23:52:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.851 23:52:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.851 23:52:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.851 23:52:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.851 23:52:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.851 23:52:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:51.851 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:51.851 23:52:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.851 23:52:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:51.851 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:51.851 23:52:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.851 23:52:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.851 23:52:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.851 23:52:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:51.851 23:52:22 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.851 23:52:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:51.851 Found net devices under 0000:31:00.0: cvl_0_0 00:07:51.851 23:52:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.851 23:52:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.851 23:52:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.851 23:52:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:51.851 23:52:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.851 23:52:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:51.851 Found net devices under 0000:31:00.1: cvl_0_1 00:07:51.851 23:52:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.851 23:52:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:51.851 23:52:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:51.851 23:52:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:51.851 23:52:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:51.851 23:52:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.851 23:52:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.851 23:52:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.851 23:52:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.851 23:52:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.851 23:52:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.851 23:52:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.851 23:52:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.851 23:52:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.851 23:52:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.851 23:52:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.851 23:52:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.851 23:52:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.112 23:52:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.112 23:52:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.112 23:52:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.112 23:52:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.112 23:52:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.112 23:52:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.112 23:52:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:07:52.112 00:07:52.113 --- 10.0.0.2 ping statistics --- 00:07:52.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.113 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:07:52.373 23:52:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:07:52.373 00:07:52.373 --- 10.0.0.1 ping statistics --- 00:07:52.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.373 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:07:52.373 23:52:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.373 23:52:22 -- nvmf/common.sh@411 -- # return 0 00:07:52.373 23:52:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:52.373 23:52:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.373 23:52:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:52.373 23:52:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:52.373 23:52:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.373 23:52:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:52.373 23:52:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:52.373 23:52:22 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:52.373 23:52:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:52.373 23:52:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:52.373 23:52:22 -- common/autotest_common.sh@10 -- # set +x 00:07:52.373 23:52:22 -- nvmf/common.sh@470 -- # nvmfpid=228076 00:07:52.373 23:52:22 -- nvmf/common.sh@471 -- # waitforlisten 228076 00:07:52.373 23:52:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.373 23:52:22 -- common/autotest_common.sh@817 -- # '[' -z 228076 ']' 00:07:52.373 23:52:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.374 23:52:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:52.374 23:52:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.374 23:52:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:52.374 23:52:22 -- common/autotest_common.sh@10 -- # set +x 00:07:52.374 [2024-04-26 23:52:22.446999] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:07:52.374 [2024-04-26 23:52:22.447061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.374 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.374 [2024-04-26 23:52:22.518056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.374 [2024-04-26 23:52:22.592477] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.374 [2024-04-26 23:52:22.592518] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.374 [2024-04-26 23:52:22.592525] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.374 [2024-04-26 23:52:22.592532] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.374 [2024-04-26 23:52:22.592538] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.374 [2024-04-26 23:52:22.592643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.374 [2024-04-26 23:52:22.592760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.374 [2024-04-26 23:52:22.592898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.374 [2024-04-26 23:52:22.592898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.329 23:52:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:53.329 23:52:23 -- common/autotest_common.sh@850 -- # return 0 00:07:53.329 23:52:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:53.329 23:52:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.329 23:52:23 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 [2024-04-26 23:52:23.274435] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@26 -- # seq 1 4 00:07:53.329 23:52:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.329 23:52:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 Null1 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 [2024-04-26 23:52:23.334722] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.329 23:52:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 Null2 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:53.329 23:52:23 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.329 23:52:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 Null3 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.329 23:52:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 Null4 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:53.329 
23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:53.329 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.329 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.329 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.329 23:52:23 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:07:53.590 00:07:53.590 Discovery Log Number of Records 6, Generation counter 6 00:07:53.590 =====Discovery Log Entry 0====== 00:07:53.590 trtype: tcp 00:07:53.590 adrfam: ipv4 00:07:53.590 subtype: current discovery subsystem 00:07:53.590 treq: not required 00:07:53.590 portid: 0 00:07:53.590 trsvcid: 4420 00:07:53.590 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:53.590 traddr: 10.0.0.2 00:07:53.590 eflags: explicit discovery connections, duplicate discovery information 00:07:53.590 sectype: none 00:07:53.590 =====Discovery Log Entry 1====== 00:07:53.590 trtype: tcp 00:07:53.590 adrfam: ipv4 00:07:53.590 subtype: nvme subsystem 00:07:53.590 treq: not required 00:07:53.590 portid: 0 00:07:53.590 trsvcid: 4420 00:07:53.590 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:53.590 traddr: 10.0.0.2 00:07:53.590 eflags: none 00:07:53.590 sectype: none 00:07:53.591 =====Discovery Log Entry 2====== 00:07:53.591 trtype: tcp 00:07:53.591 adrfam: ipv4 00:07:53.591 subtype: nvme subsystem 00:07:53.591 treq: not required 00:07:53.591 portid: 0 00:07:53.591 trsvcid: 4420 00:07:53.591 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:53.591 traddr: 10.0.0.2 00:07:53.591 eflags: none 00:07:53.591 sectype: none 00:07:53.591 =====Discovery Log Entry 3====== 00:07:53.591 trtype: tcp 00:07:53.591 adrfam: ipv4 00:07:53.591 subtype: nvme subsystem 00:07:53.591 treq: not required 00:07:53.591 portid: 0 00:07:53.591 trsvcid: 4420 00:07:53.591 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:53.591 traddr: 10.0.0.2 00:07:53.591 eflags: none 00:07:53.591 sectype: none 00:07:53.591 =====Discovery Log Entry 4====== 00:07:53.591 trtype: tcp 00:07:53.591 adrfam: ipv4 00:07:53.591 subtype: nvme subsystem 00:07:53.591 treq: not required 00:07:53.591 portid: 0 00:07:53.591 trsvcid: 4420 00:07:53.591 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:53.591 traddr: 10.0.0.2 00:07:53.591 eflags: none 00:07:53.591 sectype: none 00:07:53.591 =====Discovery Log Entry 5====== 00:07:53.591 trtype: tcp 00:07:53.591 adrfam: ipv4 00:07:53.591 subtype: discovery subsystem referral 00:07:53.591 treq: not required 00:07:53.591 portid: 0 00:07:53.591 trsvcid: 4430 00:07:53.591 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:53.591 traddr: 10.0.0.2 00:07:53.591 eflags: none 00:07:53.591 sectype: none 00:07:53.591 23:52:23 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:53.591 Perform nvmf subsystem discovery via RPC 00:07:53.591 23:52:23 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 [2024-04-26 23:52:23.591417] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:53.591 [ 00:07:53.591 { 00:07:53.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:53.591 "subtype": "Discovery", 00:07:53.591 "listen_addresses": [ 00:07:53.591 { 00:07:53.591 "transport": "TCP", 00:07:53.591 "trtype": "TCP", 00:07:53.591 "adrfam": "IPv4", 00:07:53.591 "traddr": "10.0.0.2", 00:07:53.591 "trsvcid": "4420" 00:07:53.591 } 00:07:53.591 ], 00:07:53.591 "allow_any_host": true, 00:07:53.591 "hosts": [] 00:07:53.591 }, 00:07:53.591 { 00:07:53.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.591 "subtype": "NVMe", 00:07:53.591 "listen_addresses": [ 00:07:53.591 { 00:07:53.591 "transport": "TCP", 00:07:53.591 "trtype": "TCP", 00:07:53.591 "adrfam": "IPv4", 00:07:53.591 "traddr": "10.0.0.2", 00:07:53.591 "trsvcid": "4420" 00:07:53.591 } 00:07:53.591 ], 00:07:53.591 "allow_any_host": true, 00:07:53.591 "hosts": [], 00:07:53.591 "serial_number": "SPDK00000000000001", 00:07:53.591 "model_number": "SPDK bdev Controller", 00:07:53.591 "max_namespaces": 32, 00:07:53.591 "min_cntlid": 1, 00:07:53.591 "max_cntlid": 65519, 00:07:53.591 "namespaces": [ 00:07:53.591 { 00:07:53.591 "nsid": 1, 00:07:53.591 "bdev_name": "Null1", 00:07:53.591 "name": "Null1", 00:07:53.591 "nguid": "268EF93044724AF696A05B97369A97F9", 00:07:53.591 "uuid": "268ef930-4472-4af6-96a0-5b97369a97f9" 00:07:53.591 } 00:07:53.591 ] 00:07:53.591 }, 00:07:53.591 { 00:07:53.591 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:53.591 "subtype": "NVMe", 00:07:53.591 "listen_addresses": [ 00:07:53.591 { 00:07:53.591 "transport": "TCP", 00:07:53.591 "trtype": "TCP", 00:07:53.591 "adrfam": "IPv4", 00:07:53.591 "traddr": "10.0.0.2", 00:07:53.591 "trsvcid": "4420" 00:07:53.591 } 00:07:53.591 ], 00:07:53.591 "allow_any_host": true, 00:07:53.591 "hosts": [], 00:07:53.591 "serial_number": "SPDK00000000000002", 00:07:53.591 "model_number": "SPDK bdev Controller", 00:07:53.591 "max_namespaces": 32, 00:07:53.591 "min_cntlid": 1, 00:07:53.591 "max_cntlid": 65519, 00:07:53.591 "namespaces": [ 00:07:53.591 { 00:07:53.591 "nsid": 1, 00:07:53.591 "bdev_name": "Null2", 00:07:53.591 "name": "Null2", 00:07:53.591 "nguid": "AA42153B67B4417ABD4D1ACA41DA84BA", 00:07:53.591 "uuid": "aa42153b-67b4-417a-bd4d-1aca41da84ba" 00:07:53.591 } 00:07:53.591 ] 00:07:53.591 }, 00:07:53.591 { 00:07:53.591 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:53.591 "subtype": "NVMe", 00:07:53.591 "listen_addresses": [ 00:07:53.591 { 00:07:53.591 "transport": "TCP", 00:07:53.591 "trtype": "TCP", 00:07:53.591 "adrfam": "IPv4", 00:07:53.591 "traddr": "10.0.0.2", 00:07:53.591 "trsvcid": "4420" 00:07:53.591 } 00:07:53.591 ], 00:07:53.591 "allow_any_host": true, 00:07:53.591 "hosts": [], 00:07:53.591 "serial_number": "SPDK00000000000003", 00:07:53.591 "model_number": "SPDK bdev Controller", 00:07:53.591 "max_namespaces": 32, 00:07:53.591 "min_cntlid": 1, 00:07:53.591 "max_cntlid": 65519, 00:07:53.591 "namespaces": [ 00:07:53.591 { 00:07:53.591 "nsid": 1, 00:07:53.591 "bdev_name": "Null3", 00:07:53.591 "name": "Null3", 00:07:53.591 "nguid": "26EBF38C9AE94682AFC05AD70BF1D0D3", 00:07:53.591 "uuid": "26ebf38c-9ae9-4682-afc0-5ad70bf1d0d3" 00:07:53.591 } 00:07:53.591 ] 
00:07:53.591 }, 00:07:53.591 { 00:07:53.591 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:53.591 "subtype": "NVMe", 00:07:53.591 "listen_addresses": [ 00:07:53.591 { 00:07:53.591 "transport": "TCP", 00:07:53.591 "trtype": "TCP", 00:07:53.591 "adrfam": "IPv4", 00:07:53.591 "traddr": "10.0.0.2", 00:07:53.591 "trsvcid": "4420" 00:07:53.591 } 00:07:53.591 ], 00:07:53.591 "allow_any_host": true, 00:07:53.591 "hosts": [], 00:07:53.591 "serial_number": "SPDK00000000000004", 00:07:53.591 "model_number": "SPDK bdev Controller", 00:07:53.591 "max_namespaces": 32, 00:07:53.591 "min_cntlid": 1, 00:07:53.591 "max_cntlid": 65519, 00:07:53.591 "namespaces": [ 00:07:53.591 { 00:07:53.591 "nsid": 1, 00:07:53.591 "bdev_name": "Null4", 00:07:53.591 "name": "Null4", 00:07:53.591 "nguid": "74133EB892DC4A6BBBA05239F948902C", 00:07:53.591 "uuid": "74133eb8-92dc-4a6b-bba0-5239f948902c" 00:07:53.591 } 00:07:53.591 ] 00:07:53.591 } 00:07:53.591 ] 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@42 -- # seq 1 4 00:07:53.591 23:52:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.591 23:52:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.591 23:52:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.591 23:52:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.591 23:52:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:07:53.591 23:52:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:53.591 23:52:23 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:53.591 23:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.591 23:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.591 23:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.591 23:52:23 -- target/discovery.sh@49 -- # check_bdevs= 00:07:53.591 23:52:23 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:53.591 23:52:23 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:53.591 23:52:23 -- target/discovery.sh@57 -- # nvmftestfini 00:07:53.591 23:52:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:53.591 23:52:23 -- nvmf/common.sh@117 -- # sync 00:07:53.592 23:52:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.592 23:52:23 -- nvmf/common.sh@120 -- # set +e 00:07:53.592 23:52:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.592 23:52:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.592 rmmod nvme_tcp 00:07:53.592 rmmod nvme_fabrics 00:07:53.592 rmmod nvme_keyring 00:07:53.592 23:52:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.852 23:52:23 -- nvmf/common.sh@124 -- # set -e 00:07:53.852 23:52:23 -- nvmf/common.sh@125 -- # return 0 00:07:53.852 23:52:23 -- nvmf/common.sh@478 -- # '[' -n 228076 ']' 00:07:53.852 23:52:23 -- nvmf/common.sh@479 -- # killprocess 228076 00:07:53.852 23:52:23 -- common/autotest_common.sh@936 -- # '[' -z 228076 ']' 00:07:53.852 23:52:23 -- common/autotest_common.sh@940 -- # kill -0 228076 00:07:53.852 23:52:23 -- common/autotest_common.sh@941 -- # uname 00:07:53.852 23:52:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.852 23:52:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 228076 00:07:53.852 23:52:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.852 23:52:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.852 23:52:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 228076' 00:07:53.852 killing process with pid 228076 00:07:53.852 23:52:23 -- common/autotest_common.sh@955 -- # kill 228076 00:07:53.852 [2024-04-26 23:52:23.869696] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:53.852 23:52:23 -- common/autotest_common.sh@960 -- # wait 228076 00:07:53.852 23:52:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:53.852 23:52:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:53.852 23:52:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:53.852 23:52:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.852 23:52:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.852 23:52:23 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.852 23:52:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.852 23:52:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.397 23:52:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.397 00:07:56.397 real 0m10.972s 00:07:56.397 user 0m7.846s 00:07:56.397 sys 0m5.632s 00:07:56.397 23:52:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.397 23:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:56.397 ************************************ 00:07:56.397 END TEST nvmf_discovery 00:07:56.397 ************************************ 00:07:56.397 23:52:26 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:56.397 23:52:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:56.397 23:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.397 23:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:56.397 ************************************ 00:07:56.397 START TEST nvmf_referrals 00:07:56.397 ************************************ 00:07:56.397 23:52:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:56.397 * Looking for test storage... 00:07:56.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.397 23:52:26 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.397 23:52:26 -- nvmf/common.sh@7 -- # uname -s 00:07:56.397 23:52:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.397 23:52:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.397 23:52:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.397 23:52:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.397 23:52:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.397 23:52:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.397 23:52:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.397 23:52:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.397 23:52:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.397 23:52:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.397 23:52:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:56.397 23:52:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:56.397 23:52:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.397 23:52:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.397 23:52:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.397 23:52:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.397 23:52:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.397 23:52:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.397 23:52:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.397 23:52:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.397 23:52:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.397 23:52:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.397 23:52:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.397 23:52:26 -- paths/export.sh@5 -- # export PATH 00:07:56.397 23:52:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.397 23:52:26 -- nvmf/common.sh@47 -- # : 0 00:07:56.397 23:52:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.397 23:52:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.397 23:52:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.397 23:52:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.397 23:52:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.397 23:52:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.397 23:52:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.397 23:52:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.397 23:52:26 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:56.397 23:52:26 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:56.397 23:52:26 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:56.397 23:52:26 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:56.397 23:52:26 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:56.397 23:52:26 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:56.397 23:52:26 -- target/referrals.sh@37 -- # nvmftestinit 00:07:56.397 23:52:26 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:56.397 23:52:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.397 23:52:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:56.397 23:52:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:56.397 23:52:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:56.397 23:52:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.397 23:52:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.397 23:52:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.397 23:52:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:56.397 23:52:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:56.397 23:52:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.397 23:52:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.534 23:52:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:04.535 23:52:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.535 23:52:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.535 23:52:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.535 23:52:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.535 23:52:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.535 23:52:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.535 23:52:33 -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.535 23:52:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:04.535 23:52:33 -- nvmf/common.sh@296 -- # e810=() 00:08:04.535 23:52:33 -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.535 23:52:33 -- nvmf/common.sh@297 -- # x722=() 00:08:04.535 23:52:33 -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.535 23:52:33 -- nvmf/common.sh@298 -- # mlx=() 00:08:04.535 23:52:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.535 23:52:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.535 23:52:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.535 23:52:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:04.535 23:52:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.535 23:52:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.535 23:52:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:04.535 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:04.535 23:52:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.535 23:52:33 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.535 23:52:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:04.535 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:04.535 23:52:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.535 23:52:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.535 23:52:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.535 23:52:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:04.535 23:52:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.535 23:52:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:04.535 Found net devices under 0000:31:00.0: cvl_0_0 00:08:04.535 23:52:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.535 23:52:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.535 23:52:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.535 23:52:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:04.535 23:52:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.535 23:52:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:04.535 Found net devices under 0000:31:00.1: cvl_0_1 00:08:04.535 23:52:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.535 23:52:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:04.535 23:52:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:04.535 23:52:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:04.535 23:52:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.535 23:52:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.535 23:52:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.535 23:52:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:04.535 23:52:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.535 23:52:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.535 23:52:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:04.535 23:52:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.535 23:52:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.535 23:52:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:04.535 23:52:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:04.535 23:52:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.535 23:52:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:04.535 23:52:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.535 23:52:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.535 23:52:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:04.535 23:52:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.535 23:52:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.535 23:52:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.535 23:52:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:04.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:08:04.535 00:08:04.535 --- 10.0.0.2 ping statistics --- 00:08:04.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.535 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:08:04.535 23:52:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:08:04.535 00:08:04.535 --- 10.0.0.1 ping statistics --- 00:08:04.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.535 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:08:04.535 23:52:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.535 23:52:33 -- nvmf/common.sh@411 -- # return 0 00:08:04.535 23:52:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:04.535 23:52:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.535 23:52:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:04.535 23:52:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.535 23:52:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:04.535 23:52:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:04.535 23:52:33 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:04.535 23:52:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:04.535 23:52:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:04.535 23:52:33 -- common/autotest_common.sh@10 -- # set +x 00:08:04.535 23:52:33 -- nvmf/common.sh@470 -- # nvmfpid=232525 00:08:04.535 23:52:33 -- nvmf/common.sh@471 -- # waitforlisten 232525 00:08:04.535 23:52:33 -- common/autotest_common.sh@817 -- # '[' -z 232525 ']' 00:08:04.535 23:52:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.535 23:52:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.535 23:52:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:04.535 23:52:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.535 23:52:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:04.535 23:52:33 -- common/autotest_common.sh@10 -- # set +x 00:08:04.535 [2024-04-26 23:52:33.868324] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:08:04.535 [2024-04-26 23:52:33.868406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.535 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.535 [2024-04-26 23:52:33.947406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.535 [2024-04-26 23:52:34.024448] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.535 [2024-04-26 23:52:34.024490] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.535 [2024-04-26 23:52:34.024498] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.535 [2024-04-26 23:52:34.024505] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.535 [2024-04-26 23:52:34.024510] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.535 [2024-04-26 23:52:34.024623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.535 [2024-04-26 23:52:34.024741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.535 [2024-04-26 23:52:34.024898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.535 [2024-04-26 23:52:34.024898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.535 23:52:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:04.535 23:52:34 -- common/autotest_common.sh@850 -- # return 0 00:08:04.535 23:52:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:04.535 23:52:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:04.535 23:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.535 23:52:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.535 23:52:34 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.535 23:52:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.535 23:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.536 [2024-04-26 23:52:34.692348] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.536 23:52:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.536 23:52:34 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:04.536 23:52:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.536 23:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.536 [2024-04-26 23:52:34.708514] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:04.536 23:52:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.536 23:52:34 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:04.536 23:52:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.536 23:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.536 23:52:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.536 23:52:34 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.536 23:52:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.536 23:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.536 23:52:34 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:04.536 23:52:34 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.536 23:52:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.536 23:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.536 23:52:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.536 23:52:34 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.536 23:52:34 -- target/referrals.sh@48 -- # jq length 00:08:04.536 23:52:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.536 23:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.796 23:52:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.796 23:52:34 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:04.796 23:52:34 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:04.796 23:52:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.796 23:52:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.796 23:52:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.796 23:52:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.796 23:52:34 -- target/referrals.sh@21 -- # sort 00:08:04.796 23:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.796 23:52:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.796 23:52:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.796 23:52:34 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.796 23:52:34 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:04.796 23:52:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.796 23:52:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.796 23:52:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.796 23:52:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.796 23:52:34 -- target/referrals.sh@26 -- # sort 00:08:05.057 23:52:35 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:05.057 23:52:35 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:05.057 23:52:35 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:05.057 23:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.057 23:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.057 23:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.057 23:52:35 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:05.057 23:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.057 23:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.057 23:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.057 23:52:35 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:05.057 23:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.057 23:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.057 23:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.057 23:52:35 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:08:05.057 23:52:35 -- target/referrals.sh@56 -- # jq length 00:08:05.057 23:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.057 23:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.057 23:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.057 23:52:35 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:05.057 23:52:35 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:05.057 23:52:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.057 23:52:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.057 23:52:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.057 23:52:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.057 23:52:35 -- target/referrals.sh@26 -- # sort 00:08:05.057 23:52:35 -- target/referrals.sh@26 -- # echo 00:08:05.057 23:52:35 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:05.057 23:52:35 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:05.057 23:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.057 23:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.057 23:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.057 23:52:35 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:05.057 23:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.057 23:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.317 23:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.317 23:52:35 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:05.317 23:52:35 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:05.317 23:52:35 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.317 23:52:35 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:05.317 23:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.317 23:52:35 -- target/referrals.sh@21 -- # sort 00:08:05.317 23:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.317 23:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.317 23:52:35 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:05.317 23:52:35 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.317 23:52:35 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:05.317 23:52:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.317 23:52:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.317 23:52:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.317 23:52:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.317 23:52:35 -- target/referrals.sh@26 -- # sort 00:08:05.317 23:52:35 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:05.317 23:52:35 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.317 23:52:35 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:08:05.317 23:52:35 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:05.317 23:52:35 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:05.317 23:52:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.317 23:52:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.577 23:52:35 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:05.577 23:52:35 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.577 23:52:35 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:05.577 23:52:35 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.577 23:52:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.577 23:52:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:05.836 23:52:35 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:05.837 23:52:35 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:05.837 23:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.837 23:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.837 23:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.837 23:52:35 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:05.837 23:52:35 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:05.837 23:52:35 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.837 23:52:35 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:05.837 23:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.837 23:52:35 -- target/referrals.sh@21 -- # sort 00:08:05.837 23:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.837 23:52:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.837 23:52:35 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:05.837 23:52:35 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:05.837 23:52:35 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:05.837 23:52:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.837 23:52:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.837 23:52:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.837 23:52:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.837 23:52:35 -- target/referrals.sh@26 -- # sort 00:08:06.096 23:52:36 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:06.096 23:52:36 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:06.096 23:52:36 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:06.096 23:52:36 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:06.096 23:52:36 -- 
target/referrals.sh@75 -- # jq -r .subnqn 00:08:06.096 23:52:36 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:06.096 23:52:36 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:06.096 23:52:36 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:06.096 23:52:36 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:06.096 23:52:36 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:06.096 23:52:36 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:06.096 23:52:36 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:06.096 23:52:36 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:06.097 23:52:36 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:06.097 23:52:36 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:06.097 23:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.097 23:52:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.097 23:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.097 23:52:36 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:06.097 23:52:36 -- target/referrals.sh@82 -- # jq length 00:08:06.097 23:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.097 23:52:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.097 23:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.357 23:52:36 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:06.357 23:52:36 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:06.357 23:52:36 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:06.357 23:52:36 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:06.357 23:52:36 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:06.357 23:52:36 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:06.357 23:52:36 -- target/referrals.sh@26 -- # sort 00:08:06.357 23:52:36 -- target/referrals.sh@26 -- # echo 00:08:06.357 23:52:36 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:06.357 23:52:36 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:06.357 23:52:36 -- target/referrals.sh@86 -- # nvmftestfini 00:08:06.357 23:52:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:06.357 23:52:36 -- nvmf/common.sh@117 -- # sync 00:08:06.357 23:52:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:06.357 23:52:36 -- nvmf/common.sh@120 -- # set +e 00:08:06.357 23:52:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:06.357 23:52:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:06.357 rmmod nvme_tcp 00:08:06.357 rmmod nvme_fabrics 00:08:06.357 rmmod nvme_keyring 00:08:06.357 23:52:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:06.357 23:52:36 -- nvmf/common.sh@124 -- # 
set -e 00:08:06.357 23:52:36 -- nvmf/common.sh@125 -- # return 0 00:08:06.357 23:52:36 -- nvmf/common.sh@478 -- # '[' -n 232525 ']' 00:08:06.357 23:52:36 -- nvmf/common.sh@479 -- # killprocess 232525 00:08:06.357 23:52:36 -- common/autotest_common.sh@936 -- # '[' -z 232525 ']' 00:08:06.357 23:52:36 -- common/autotest_common.sh@940 -- # kill -0 232525 00:08:06.357 23:52:36 -- common/autotest_common.sh@941 -- # uname 00:08:06.357 23:52:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:06.357 23:52:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 232525 00:08:06.625 23:52:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:06.625 23:52:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:06.625 23:52:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 232525' 00:08:06.625 killing process with pid 232525 00:08:06.625 23:52:36 -- common/autotest_common.sh@955 -- # kill 232525 00:08:06.625 23:52:36 -- common/autotest_common.sh@960 -- # wait 232525 00:08:06.625 23:52:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:06.625 23:52:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:06.625 23:52:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:06.625 23:52:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.625 23:52:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.625 23:52:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.625 23:52:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.625 23:52:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.169 23:52:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.169 00:08:09.169 real 0m12.535s 00:08:09.169 user 0m13.817s 00:08:09.169 sys 0m6.167s 00:08:09.169 23:52:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.169 23:52:38 -- common/autotest_common.sh@10 -- # set +x 00:08:09.169 ************************************ 00:08:09.169 END TEST nvmf_referrals 00:08:09.169 ************************************ 00:08:09.169 23:52:38 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:09.169 23:52:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:09.169 23:52:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.169 23:52:38 -- common/autotest_common.sh@10 -- # set +x 00:08:09.169 ************************************ 00:08:09.169 START TEST nvmf_connect_disconnect 00:08:09.169 ************************************ 00:08:09.169 23:52:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:09.169 * Looking for test storage... 
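The nvmf_referrals run that just finished exercises the discovery-referral RPCs end to end. As a rough manual sketch of the same flow, assuming a target already serving discovery on 10.0.0.2:8009 (the rpc_cmd wrapper in the log is essentially scripts/rpc.py talking to the default /var/tmp/spdk.sock; the nvme discover call in the log also passes --hostnqn/--hostid, omitted here for brevity):

    # add three referrals and confirm the target reports them
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length          # the test expects 3

    # a host should see the same addresses in the discovery log page
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # remove them again; get_referrals drops back to 0
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430

The later half of the test repeats the cycle with subsystem-scoped referrals (-n discovery and -n nqn.2016-06.io.spdk:cnode1), which is why the log also filters the discovery entries by subtype and checks the reported subnqn values.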
00:08:09.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.169 23:52:39 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.169 23:52:39 -- nvmf/common.sh@7 -- # uname -s 00:08:09.169 23:52:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.169 23:52:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.169 23:52:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.170 23:52:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.170 23:52:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.170 23:52:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.170 23:52:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.170 23:52:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.170 23:52:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.170 23:52:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.170 23:52:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:09.170 23:52:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:09.170 23:52:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.170 23:52:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.170 23:52:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.170 23:52:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.170 23:52:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.170 23:52:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.170 23:52:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.170 23:52:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.170 23:52:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.170 23:52:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.170 23:52:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.170 23:52:39 -- paths/export.sh@5 -- # export PATH 00:08:09.170 23:52:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.170 23:52:39 -- nvmf/common.sh@47 -- # : 0 00:08:09.170 23:52:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.170 23:52:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.170 23:52:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.170 23:52:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.170 23:52:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.170 23:52:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.170 23:52:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.170 23:52:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.170 23:52:39 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:09.170 23:52:39 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:09.170 23:52:39 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:09.170 23:52:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:09.170 23:52:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.170 23:52:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:09.170 23:52:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:09.170 23:52:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:09.170 23:52:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.170 23:52:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.170 23:52:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.170 23:52:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:09.170 23:52:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:09.170 23:52:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.170 23:52:39 -- common/autotest_common.sh@10 -- # set +x 00:08:15.760 23:52:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:15.760 23:52:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.760 23:52:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.760 23:52:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.760 23:52:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.760 23:52:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.760 23:52:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.760 23:52:45 -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.760 23:52:45 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:15.760 23:52:45 -- nvmf/common.sh@296 -- # e810=() 00:08:15.760 23:52:45 -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.760 23:52:45 -- nvmf/common.sh@297 -- # x722=() 00:08:15.760 23:52:45 -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.760 23:52:45 -- nvmf/common.sh@298 -- # mlx=() 00:08:15.760 23:52:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.760 23:52:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.760 23:52:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.760 23:52:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.760 23:52:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.760 23:52:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.760 23:52:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:15.760 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:15.760 23:52:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.760 23:52:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:15.760 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:15.760 23:52:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.760 23:52:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.760 23:52:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.760 23:52:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:15.760 23:52:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.760 23:52:45 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:08:15.760 Found net devices under 0000:31:00.0: cvl_0_0 00:08:15.760 23:52:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.760 23:52:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.760 23:52:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.760 23:52:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:15.760 23:52:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.760 23:52:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:15.760 Found net devices under 0000:31:00.1: cvl_0_1 00:08:15.760 23:52:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.760 23:52:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:15.760 23:52:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:15.760 23:52:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:15.760 23:52:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:15.760 23:52:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.760 23:52:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.760 23:52:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.760 23:52:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.760 23:52:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.760 23:52:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.760 23:52:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.760 23:52:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.760 23:52:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.760 23:52:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.760 23:52:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.760 23:52:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.760 23:52:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.021 23:52:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.021 23:52:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.021 23:52:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:16.021 23:52:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.021 23:52:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.021 23:52:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.282 23:52:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:08:16.282 00:08:16.282 --- 10.0.0.2 ping statistics --- 00:08:16.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.282 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:08:16.282 23:52:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.530 ms 00:08:16.282 00:08:16.282 --- 10.0.0.1 ping statistics --- 00:08:16.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.282 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:08:16.282 23:52:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.282 23:52:46 -- nvmf/common.sh@411 -- # return 0 00:08:16.282 23:52:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:16.282 23:52:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.282 23:52:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:16.282 23:52:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:16.282 23:52:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.282 23:52:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:16.282 23:52:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:16.282 23:52:46 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:16.282 23:52:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:16.282 23:52:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:16.282 23:52:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.282 23:52:46 -- nvmf/common.sh@470 -- # nvmfpid=237540 00:08:16.282 23:52:46 -- nvmf/common.sh@471 -- # waitforlisten 237540 00:08:16.282 23:52:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.282 23:52:46 -- common/autotest_common.sh@817 -- # '[' -z 237540 ']' 00:08:16.282 23:52:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.282 23:52:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:16.282 23:52:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.282 23:52:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:16.282 23:52:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.282 [2024-04-26 23:52:46.359625] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:08:16.283 [2024-04-26 23:52:46.359690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.283 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.283 [2024-04-26 23:52:46.431258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.543 [2024-04-26 23:52:46.506175] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.543 [2024-04-26 23:52:46.506214] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.543 [2024-04-26 23:52:46.506221] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.543 [2024-04-26 23:52:46.506232] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.543 [2024-04-26 23:52:46.506238] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
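As in the earlier referrals run, the environment for connect_disconnect is built from the two ports of the same E810 NIC: cvl_0_0 is moved into a network namespace to act as the target side and cvl_0_1 stays in the default namespace as the initiator. Condensed from the commands shown in the log (nvmf_tcp_init in nvmf/common.sh), the bring-up is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is what produces the EAL, tracepoint, and reactor notices that follow.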
00:08:16.543 [2024-04-26 23:52:46.506350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.543 [2024-04-26 23:52:46.506468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.543 [2024-04-26 23:52:46.506595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.543 [2024-04-26 23:52:46.506597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.114 23:52:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:17.114 23:52:47 -- common/autotest_common.sh@850 -- # return 0 00:08:17.114 23:52:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:17.114 23:52:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:17.114 23:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.114 23:52:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.114 23:52:47 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:17.114 23:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.114 23:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.114 [2024-04-26 23:52:47.189412] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.114 23:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.114 23:52:47 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:17.114 23:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.114 23:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.114 23:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.114 23:52:47 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:17.114 23:52:47 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:17.114 23:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.114 23:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.114 23:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.114 23:52:47 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:17.114 23:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.114 23:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.114 23:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.114 23:52:47 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.114 23:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.114 23:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.114 [2024-04-26 23:52:47.248763] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.114 23:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.114 23:52:47 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:17.114 23:52:47 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:17.114 23:52:47 -- target/connect_disconnect.sh@34 -- # set +x 00:08:21.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.592 23:53:05 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:35.592 23:53:05 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:35.592 23:53:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:35.592 23:53:05 -- nvmf/common.sh@117 -- # sync 00:08:35.592 23:53:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.592 23:53:05 -- nvmf/common.sh@120 -- # set +e 00:08:35.592 23:53:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.592 23:53:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.592 rmmod nvme_tcp 00:08:35.592 rmmod nvme_fabrics 00:08:35.592 rmmod nvme_keyring 00:08:35.592 23:53:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.592 23:53:05 -- nvmf/common.sh@124 -- # set -e 00:08:35.592 23:53:05 -- nvmf/common.sh@125 -- # return 0 00:08:35.592 23:53:05 -- nvmf/common.sh@478 -- # '[' -n 237540 ']' 00:08:35.592 23:53:05 -- nvmf/common.sh@479 -- # killprocess 237540 00:08:35.592 23:53:05 -- common/autotest_common.sh@936 -- # '[' -z 237540 ']' 00:08:35.592 23:53:05 -- common/autotest_common.sh@940 -- # kill -0 237540 00:08:35.592 23:53:05 -- common/autotest_common.sh@941 -- # uname 00:08:35.592 23:53:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:35.592 23:53:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 237540 00:08:35.592 23:53:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:35.592 23:53:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:35.592 23:53:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 237540' 00:08:35.592 killing process with pid 237540 00:08:35.592 23:53:05 -- common/autotest_common.sh@955 -- # kill 237540 00:08:35.592 23:53:05 -- common/autotest_common.sh@960 -- # wait 237540 00:08:35.852 23:53:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:35.852 23:53:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:35.852 23:53:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:35.852 23:53:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.852 23:53:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.852 23:53:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.852 23:53:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.852 23:53:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.761 23:53:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:37.761 00:08:37.761 real 0m28.922s 00:08:37.761 user 1m18.992s 00:08:37.761 sys 0m6.574s 00:08:37.761 23:53:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:37.761 23:53:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.761 ************************************ 00:08:37.761 END TEST nvmf_connect_disconnect 00:08:37.761 ************************************ 00:08:37.761 23:53:07 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:37.761 23:53:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:37.761 23:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.761 23:53:07 -- common/autotest_common.sh@10 -- # set +x 00:08:38.022 ************************************ 00:08:38.022 START TEST nvmf_multitarget 00:08:38.022 ************************************ 00:08:38.022 23:53:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 
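The connect_disconnect test that just completed boils down to a small target-side configuration plus a loop of fabric connects and disconnects from the initiator namespace. A sketch, with the RPCs taken from the log and the host-side loop paraphrased from test/nvmf/target/connect_disconnect.sh (the exact nvme-cli invocation there may differ slightly, e.g. it also passes the host NQN/ID):

    # target side, via rpc.py against the nvmf_tgt running in the namespace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                        # creates Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side, repeated num_iterations=5 times
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the "disconnected 1 controller(s)" lines

Five iterations, hence the five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" messages in the log above.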
00:08:38.022 * Looking for test storage... 00:08:38.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.022 23:53:08 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.022 23:53:08 -- nvmf/common.sh@7 -- # uname -s 00:08:38.022 23:53:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.022 23:53:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.022 23:53:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.022 23:53:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.022 23:53:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.022 23:53:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.022 23:53:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.022 23:53:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.022 23:53:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.022 23:53:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.022 23:53:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.022 23:53:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.022 23:53:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.022 23:53:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.022 23:53:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.022 23:53:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.022 23:53:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.022 23:53:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.022 23:53:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.022 23:53:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.022 23:53:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.022 23:53:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.022 23:53:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.022 23:53:08 -- paths/export.sh@5 -- # export PATH 00:08:38.022 23:53:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.022 23:53:08 -- nvmf/common.sh@47 -- # : 0 00:08:38.022 23:53:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.022 23:53:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.022 23:53:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.022 23:53:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.023 23:53:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.023 23:53:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.023 23:53:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.023 23:53:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.023 23:53:08 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:38.023 23:53:08 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:38.023 23:53:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:38.023 23:53:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.023 23:53:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:38.023 23:53:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:38.023 23:53:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:38.023 23:53:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.023 23:53:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.023 23:53:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.023 23:53:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:38.023 23:53:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:38.023 23:53:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.023 23:53:08 -- common/autotest_common.sh@10 -- # set +x 00:08:46.166 23:53:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:46.166 23:53:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.166 23:53:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:46.166 23:53:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.166 23:53:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.166 23:53:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.166 23:53:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.166 23:53:15 -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.166 23:53:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.166 23:53:15 -- 
nvmf/common.sh@296 -- # e810=() 00:08:46.166 23:53:15 -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.166 23:53:15 -- nvmf/common.sh@297 -- # x722=() 00:08:46.166 23:53:15 -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.166 23:53:15 -- nvmf/common.sh@298 -- # mlx=() 00:08:46.166 23:53:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.166 23:53:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.166 23:53:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:46.166 23:53:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.166 23:53:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.166 23:53:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.166 23:53:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:46.166 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:46.166 23:53:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.166 23:53:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:46.166 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:46.166 23:53:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.166 23:53:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.166 23:53:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.166 23:53:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:46.166 23:53:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.166 23:53:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:08:46.166 Found net devices under 0000:31:00.0: cvl_0_0 00:08:46.166 23:53:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.166 23:53:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.166 23:53:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.166 23:53:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:46.166 23:53:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.166 23:53:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:46.166 Found net devices under 0000:31:00.1: cvl_0_1 00:08:46.166 23:53:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.166 23:53:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:46.166 23:53:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:46.166 23:53:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:46.166 23:53:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:46.166 23:53:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.166 23:53:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.166 23:53:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.166 23:53:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.166 23:53:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.167 23:53:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.167 23:53:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.167 23:53:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.167 23:53:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.167 23:53:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.167 23:53:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.167 23:53:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.167 23:53:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.167 23:53:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.167 23:53:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.167 23:53:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.167 23:53:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.167 23:53:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.167 23:53:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.167 23:53:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:08:46.167 00:08:46.167 --- 10.0.0.2 ping statistics --- 00:08:46.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.167 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:08:46.167 23:53:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:08:46.167 00:08:46.167 --- 10.0.0.1 ping statistics --- 00:08:46.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.167 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:08:46.167 23:53:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.167 23:53:15 -- nvmf/common.sh@411 -- # return 0 00:08:46.167 23:53:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:46.167 23:53:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.167 23:53:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:46.167 23:53:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:46.167 23:53:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.167 23:53:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:46.167 23:53:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:46.167 23:53:15 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:46.167 23:53:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:46.167 23:53:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:46.167 23:53:15 -- common/autotest_common.sh@10 -- # set +x 00:08:46.167 23:53:15 -- nvmf/common.sh@470 -- # nvmfpid=245701 00:08:46.167 23:53:15 -- nvmf/common.sh@471 -- # waitforlisten 245701 00:08:46.167 23:53:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.167 23:53:15 -- common/autotest_common.sh@817 -- # '[' -z 245701 ']' 00:08:46.167 23:53:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.167 23:53:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:46.167 23:53:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.167 23:53:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:46.167 23:53:15 -- common/autotest_common.sh@10 -- # set +x 00:08:46.167 [2024-04-26 23:53:15.557953] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:08:46.167 [2024-04-26 23:53:15.558018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.167 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.167 [2024-04-26 23:53:15.630576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.167 [2024-04-26 23:53:15.697926] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.167 [2024-04-26 23:53:15.697966] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.167 [2024-04-26 23:53:15.697978] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.167 [2024-04-26 23:53:15.697984] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.167 [2024-04-26 23:53:15.697990] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
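Because the target is launched with -e 0xFFFF (all tracepoint groups enabled) and shared-memory id 0, the app_setup_trace notices above spell out how to grab a trace while it runs. Roughly, and treating the exact binary location in this workspace as an assumption:

    # live snapshot of nvmf tracepoints from the running app, as the notice suggests
    ./build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline decoding later
    cp /dev/shm/nvmf_trace.0 /tmp/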
00:08:46.167 [2024-04-26 23:53:15.698032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.167 [2024-04-26 23:53:15.698150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.167 [2024-04-26 23:53:15.698306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.167 [2024-04-26 23:53:15.698308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.167 23:53:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:46.167 23:53:16 -- common/autotest_common.sh@850 -- # return 0 00:08:46.167 23:53:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:46.167 23:53:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:46.167 23:53:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.167 23:53:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.167 23:53:16 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:46.167 23:53:16 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:46.167 23:53:16 -- target/multitarget.sh@21 -- # jq length 00:08:46.428 23:53:16 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:46.428 23:53:16 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:46.428 "nvmf_tgt_1" 00:08:46.428 23:53:16 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:46.689 "nvmf_tgt_2" 00:08:46.689 23:53:16 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:46.689 23:53:16 -- target/multitarget.sh@28 -- # jq length 00:08:46.689 23:53:16 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:46.689 23:53:16 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:46.689 true 00:08:46.689 23:53:16 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:46.949 true 00:08:46.949 23:53:16 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:46.949 23:53:16 -- target/multitarget.sh@35 -- # jq length 00:08:46.949 23:53:17 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:46.949 23:53:17 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:46.949 23:53:17 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:46.949 23:53:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:46.949 23:53:17 -- nvmf/common.sh@117 -- # sync 00:08:46.949 23:53:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.949 23:53:17 -- nvmf/common.sh@120 -- # set +e 00:08:46.949 23:53:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.949 23:53:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.949 rmmod nvme_tcp 00:08:46.949 rmmod nvme_fabrics 00:08:46.949 rmmod nvme_keyring 00:08:46.949 23:53:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:47.209 23:53:17 -- nvmf/common.sh@124 -- # set -e 00:08:47.209 23:53:17 -- nvmf/common.sh@125 -- # return 0 
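The nvmf_multitarget case that just completed is driven entirely through test/nvmf/target/multitarget_rpc.py: it counts the default target, creates nvmf_tgt_1 and nvmf_tgt_2, deletes them again, and re-checks the count. A minimal sketch of that sequence, using only the calls shown in the trace (the helper path and the -s 32 argument are exactly as invoked in this run):

#!/usr/bin/env bash
# Sketch of the multitarget flow traced above; RPC_PY is the helper the test calls.
set -euo pipefail

RPC_PY=test/nvmf/target/multitarget_rpc.py

count_targets() { "$RPC_PY" nvmf_get_targets | jq length; }

[ "$(count_targets)" -eq 1 ]            # only the default target at start

"$RPC_PY" nvmf_create_target -n nvmf_tgt_1 -s 32
"$RPC_PY" nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$(count_targets)" -eq 3 ]            # default target plus the two new ones

"$RPC_PY" nvmf_delete_target -n nvmf_tgt_1
"$RPC_PY" nvmf_delete_target -n nvmf_tgt_2
[ "$(count_targets)" -eq 1 ]            # back to just the default target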
00:08:47.209 23:53:17 -- nvmf/common.sh@478 -- # '[' -n 245701 ']' 00:08:47.209 23:53:17 -- nvmf/common.sh@479 -- # killprocess 245701 00:08:47.209 23:53:17 -- common/autotest_common.sh@936 -- # '[' -z 245701 ']' 00:08:47.209 23:53:17 -- common/autotest_common.sh@940 -- # kill -0 245701 00:08:47.210 23:53:17 -- common/autotest_common.sh@941 -- # uname 00:08:47.210 23:53:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:47.210 23:53:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 245701 00:08:47.210 23:53:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:47.210 23:53:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:47.210 23:53:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 245701' 00:08:47.210 killing process with pid 245701 00:08:47.210 23:53:17 -- common/autotest_common.sh@955 -- # kill 245701 00:08:47.210 23:53:17 -- common/autotest_common.sh@960 -- # wait 245701 00:08:47.210 23:53:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:47.210 23:53:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:47.210 23:53:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:47.210 23:53:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.210 23:53:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:47.210 23:53:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.210 23:53:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.210 23:53:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.761 23:53:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.762 00:08:49.762 real 0m11.337s 00:08:49.762 user 0m9.427s 00:08:49.762 sys 0m5.807s 00:08:49.762 23:53:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:49.762 23:53:19 -- common/autotest_common.sh@10 -- # set +x 00:08:49.762 ************************************ 00:08:49.762 END TEST nvmf_multitarget 00:08:49.762 ************************************ 00:08:49.762 23:53:19 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:49.762 23:53:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:49.762 23:53:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.762 23:53:19 -- common/autotest_common.sh@10 -- # set +x 00:08:49.762 ************************************ 00:08:49.762 START TEST nvmf_rpc 00:08:49.762 ************************************ 00:08:49.762 23:53:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:49.762 * Looking for test storage... 
00:08:49.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.762 23:53:19 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.762 23:53:19 -- nvmf/common.sh@7 -- # uname -s 00:08:49.762 23:53:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.762 23:53:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.762 23:53:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.762 23:53:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.762 23:53:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.762 23:53:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.762 23:53:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.762 23:53:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.762 23:53:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.762 23:53:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.762 23:53:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.762 23:53:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.762 23:53:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.762 23:53:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.762 23:53:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.762 23:53:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.762 23:53:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.762 23:53:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.762 23:53:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.762 23:53:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.762 23:53:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.762 23:53:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.762 23:53:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.762 23:53:19 -- paths/export.sh@5 -- # export PATH 00:08:49.762 23:53:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.762 23:53:19 -- nvmf/common.sh@47 -- # : 0 00:08:49.762 23:53:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.762 23:53:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.762 23:53:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.762 23:53:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.762 23:53:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.762 23:53:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.762 23:53:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.762 23:53:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.762 23:53:19 -- target/rpc.sh@11 -- # loops=5 00:08:49.762 23:53:19 -- target/rpc.sh@23 -- # nvmftestinit 00:08:49.762 23:53:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:49.762 23:53:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.762 23:53:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:49.762 23:53:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:49.762 23:53:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:49.762 23:53:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.762 23:53:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.762 23:53:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.762 23:53:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:49.762 23:53:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:49.762 23:53:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.762 23:53:19 -- common/autotest_common.sh@10 -- # set +x 00:08:57.913 23:53:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:57.913 23:53:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.913 23:53:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.913 23:53:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.913 23:53:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.913 23:53:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.913 23:53:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.913 23:53:26 -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.913 23:53:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.913 23:53:26 -- nvmf/common.sh@296 -- # e810=() 00:08:57.913 23:53:26 -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.913 
23:53:26 -- nvmf/common.sh@297 -- # x722=() 00:08:57.913 23:53:26 -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.913 23:53:26 -- nvmf/common.sh@298 -- # mlx=() 00:08:57.913 23:53:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.913 23:53:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.913 23:53:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.913 23:53:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.913 23:53:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.913 23:53:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.913 23:53:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:57.913 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:57.913 23:53:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.913 23:53:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:57.913 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:57.913 23:53:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.913 23:53:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.913 23:53:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.913 23:53:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.913 23:53:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.913 23:53:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:57.913 Found net devices under 0000:31:00.0: cvl_0_0 00:08:57.913 23:53:26 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:57.913 23:53:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.913 23:53:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.913 23:53:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.913 23:53:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.913 23:53:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:57.913 Found net devices under 0000:31:00.1: cvl_0_1 00:08:57.913 23:53:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.913 23:53:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:57.913 23:53:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:57.913 23:53:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:57.913 23:53:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:57.913 23:53:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.913 23:53:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.913 23:53:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.913 23:53:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.913 23:53:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.913 23:53:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.913 23:53:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.913 23:53:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.914 23:53:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.914 23:53:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.914 23:53:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.914 23:53:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.914 23:53:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.914 23:53:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.914 23:53:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.914 23:53:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.914 23:53:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.914 23:53:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.914 23:53:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.914 23:53:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:08:57.914 00:08:57.914 --- 10.0.0.2 ping statistics --- 00:08:57.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.914 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:08:57.914 23:53:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:08:57.914 00:08:57.914 --- 10.0.0.1 ping statistics --- 00:08:57.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.914 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:08:57.914 23:53:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.914 23:53:27 -- nvmf/common.sh@411 -- # return 0 00:08:57.914 23:53:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:57.914 23:53:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.914 23:53:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:57.914 23:53:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:57.914 23:53:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.914 23:53:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:57.914 23:53:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:57.914 23:53:27 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:57.914 23:53:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:57.914 23:53:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:57.914 23:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:57.914 23:53:27 -- nvmf/common.sh@470 -- # nvmfpid=250325 00:08:57.914 23:53:27 -- nvmf/common.sh@471 -- # waitforlisten 250325 00:08:57.914 23:53:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.914 23:53:27 -- common/autotest_common.sh@817 -- # '[' -z 250325 ']' 00:08:57.914 23:53:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.914 23:53:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:57.914 23:53:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.914 23:53:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:57.914 23:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:57.914 [2024-04-26 23:53:27.129528] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:08:57.914 [2024-04-26 23:53:27.129593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.914 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.914 [2024-04-26 23:53:27.201254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.914 [2024-04-26 23:53:27.275965] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.914 [2024-04-26 23:53:27.276004] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.914 [2024-04-26 23:53:27.276012] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.914 [2024-04-26 23:53:27.276019] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.914 [2024-04-26 23:53:27.276024] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
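As in the previous test, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket named in the log (/var/tmp/spdk.sock) is usable. A rough sketch of that launch, with the nvmf_tgt command line copied from this run and a simple socket poll standing in for waitforlisten (the 60-attempt limit is an assumption, not the harness value):

#!/usr/bin/env bash
# Sketch of nvmfappstart as traced above: run nvmf_tgt in the target namespace
# and wait for its RPC socket before issuing any rpc_cmd calls.
set -euo pipefail

NS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll for the RPC socket; a stand-in for what the harness's waitforlisten waits on.
for _ in $(seq 1 60); do
    [ -S "$RPC_SOCK" ] && break
    sleep 1
done
echo "nvmf_tgt running as pid $nvmfpid"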
00:08:57.914 [2024-04-26 23:53:27.276134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.914 [2024-04-26 23:53:27.276260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.914 [2024-04-26 23:53:27.276422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.914 [2024-04-26 23:53:27.276422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.914 23:53:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:57.914 23:53:27 -- common/autotest_common.sh@850 -- # return 0 00:08:57.914 23:53:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:57.914 23:53:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:57.914 23:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:57.914 23:53:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.914 23:53:27 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:57.914 23:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.914 23:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:57.914 23:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.914 23:53:27 -- target/rpc.sh@26 -- # stats='{ 00:08:57.914 "tick_rate": 2400000000, 00:08:57.914 "poll_groups": [ 00:08:57.914 { 00:08:57.914 "name": "nvmf_tgt_poll_group_0", 00:08:57.914 "admin_qpairs": 0, 00:08:57.914 "io_qpairs": 0, 00:08:57.914 "current_admin_qpairs": 0, 00:08:57.914 "current_io_qpairs": 0, 00:08:57.914 "pending_bdev_io": 0, 00:08:57.914 "completed_nvme_io": 0, 00:08:57.914 "transports": [] 00:08:57.914 }, 00:08:57.914 { 00:08:57.914 "name": "nvmf_tgt_poll_group_1", 00:08:57.914 "admin_qpairs": 0, 00:08:57.914 "io_qpairs": 0, 00:08:57.914 "current_admin_qpairs": 0, 00:08:57.914 "current_io_qpairs": 0, 00:08:57.914 "pending_bdev_io": 0, 00:08:57.914 "completed_nvme_io": 0, 00:08:57.914 "transports": [] 00:08:57.914 }, 00:08:57.914 { 00:08:57.914 "name": "nvmf_tgt_poll_group_2", 00:08:57.914 "admin_qpairs": 0, 00:08:57.914 "io_qpairs": 0, 00:08:57.914 "current_admin_qpairs": 0, 00:08:57.914 "current_io_qpairs": 0, 00:08:57.914 "pending_bdev_io": 0, 00:08:57.914 "completed_nvme_io": 0, 00:08:57.914 "transports": [] 00:08:57.914 }, 00:08:57.914 { 00:08:57.914 "name": "nvmf_tgt_poll_group_3", 00:08:57.914 "admin_qpairs": 0, 00:08:57.914 "io_qpairs": 0, 00:08:57.914 "current_admin_qpairs": 0, 00:08:57.914 "current_io_qpairs": 0, 00:08:57.914 "pending_bdev_io": 0, 00:08:57.914 "completed_nvme_io": 0, 00:08:57.914 "transports": [] 00:08:57.914 } 00:08:57.914 ] 00:08:57.914 }' 00:08:57.914 23:53:27 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:57.914 23:53:27 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:57.914 23:53:27 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:57.914 23:53:27 -- target/rpc.sh@15 -- # wc -l 00:08:57.914 23:53:28 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:57.914 23:53:28 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:57.914 23:53:28 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:57.914 23:53:28 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.914 23:53:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.914 23:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.914 [2024-04-26 23:53:28.068751] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.914 23:53:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.914 23:53:28 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:57.914 23:53:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.914 23:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.914 23:53:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.914 23:53:28 -- target/rpc.sh@33 -- # stats='{ 00:08:57.914 "tick_rate": 2400000000, 00:08:57.914 "poll_groups": [ 00:08:57.914 { 00:08:57.914 "name": "nvmf_tgt_poll_group_0", 00:08:57.914 "admin_qpairs": 0, 00:08:57.914 "io_qpairs": 0, 00:08:57.914 "current_admin_qpairs": 0, 00:08:57.914 "current_io_qpairs": 0, 00:08:57.914 "pending_bdev_io": 0, 00:08:57.914 "completed_nvme_io": 0, 00:08:57.914 "transports": [ 00:08:57.914 { 00:08:57.914 "trtype": "TCP" 00:08:57.914 } 00:08:57.914 ] 00:08:57.914 }, 00:08:57.914 { 00:08:57.914 "name": "nvmf_tgt_poll_group_1", 00:08:57.914 "admin_qpairs": 0, 00:08:57.914 "io_qpairs": 0, 00:08:57.914 "current_admin_qpairs": 0, 00:08:57.914 "current_io_qpairs": 0, 00:08:57.914 "pending_bdev_io": 0, 00:08:57.914 "completed_nvme_io": 0, 00:08:57.914 "transports": [ 00:08:57.914 { 00:08:57.914 "trtype": "TCP" 00:08:57.914 } 00:08:57.914 ] 00:08:57.914 }, 00:08:57.914 { 00:08:57.914 "name": "nvmf_tgt_poll_group_2", 00:08:57.914 "admin_qpairs": 0, 00:08:57.914 "io_qpairs": 0, 00:08:57.914 "current_admin_qpairs": 0, 00:08:57.914 "current_io_qpairs": 0, 00:08:57.914 "pending_bdev_io": 0, 00:08:57.914 "completed_nvme_io": 0, 00:08:57.914 "transports": [ 00:08:57.914 { 00:08:57.914 "trtype": "TCP" 00:08:57.914 } 00:08:57.914 ] 00:08:57.914 }, 00:08:57.914 { 00:08:57.914 "name": "nvmf_tgt_poll_group_3", 00:08:57.914 "admin_qpairs": 0, 00:08:57.914 "io_qpairs": 0, 00:08:57.914 "current_admin_qpairs": 0, 00:08:57.914 "current_io_qpairs": 0, 00:08:57.914 "pending_bdev_io": 0, 00:08:57.914 "completed_nvme_io": 0, 00:08:57.914 "transports": [ 00:08:57.914 { 00:08:57.914 "trtype": "TCP" 00:08:57.914 } 00:08:57.914 ] 00:08:57.914 } 00:08:57.914 ] 00:08:57.914 }' 00:08:57.914 23:53:28 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:57.914 23:53:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:57.914 23:53:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:57.914 23:53:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:58.176 23:53:28 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:58.176 23:53:28 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:58.176 23:53:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:58.176 23:53:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:58.176 23:53:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:58.176 23:53:28 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:58.176 23:53:28 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:58.176 23:53:28 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:58.176 23:53:28 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:58.176 23:53:28 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:58.176 23:53:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.176 23:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 Malloc1 00:08:58.176 23:53:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.176 23:53:28 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:58.176 23:53:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.176 23:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 
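The two nvmf_get_stats snapshots above bracket nvmf_create_transport: before the call each poll group reports an empty "transports" list, afterwards every group carries a TCP transport entry, and the jsum helpers confirm the admin/io queue-pair counters are still zero. A hedged sketch of the same check; rpc_cmd in the trace is the harness wrapper, and scripts/rpc.py is assumed here as the equivalent standalone client:

#!/usr/bin/env bash
# Sketch of the transport/stats check traced above. RPC is an assumed stand-in
# for rpc_cmd; the method names, jq filters and awk sum come from the trace.
set -euo pipefail

RPC="scripts/rpc.py"

jsum() {  # sum one numeric field across all poll groups, as the test's jsum does
    "$RPC" nvmf_get_stats | jq "$1" | awk '{s+=$1} END {print s}'
}

"$RPC" nvmf_create_transport -t tcp -o -u 8192          # create the TCP transport

"$RPC" nvmf_get_stats | jq '.poll_groups[].transports[0].trtype'   # "TCP" for every group
[ "$(jsum '.poll_groups[].admin_qpairs')" -eq 0 ]
[ "$(jsum '.poll_groups[].io_qpairs')" -eq 0 ]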
23:53:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.176 23:53:28 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:58.176 23:53:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.176 23:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 23:53:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.176 23:53:28 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:58.176 23:53:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.176 23:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 23:53:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.176 23:53:28 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.176 23:53:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.176 23:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 [2024-04-26 23:53:28.252809] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.176 23:53:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.176 23:53:28 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:08:58.176 23:53:28 -- common/autotest_common.sh@638 -- # local es=0 00:08:58.176 23:53:28 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:08:58.176 23:53:28 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:58.176 23:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:58.176 23:53:28 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:58.176 23:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:58.176 23:53:28 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:58.176 23:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:58.176 23:53:28 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:58.176 23:53:28 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:58.176 23:53:28 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:08:58.176 [2024-04-26 23:53:28.279615] ctrlr.c: 780:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:08:58.176 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:58.176 could not add new controller: failed to write to nvme-fabrics device 00:08:58.176 23:53:28 -- common/autotest_common.sh@641 -- # es=1 00:08:58.176 23:53:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:58.176 23:53:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:58.176 23:53:28 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:08:58.176 23:53:28 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:58.176 23:53:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.176 23:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 23:53:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.176 23:53:28 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.091 23:53:29 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:00.091 23:53:29 -- common/autotest_common.sh@1184 -- # local i=0 00:09:00.091 23:53:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.091 23:53:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:00.091 23:53:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:02.003 23:53:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:02.003 23:53:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:02.003 23:53:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:02.003 23:53:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:02.004 23:53:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:02.004 23:53:31 -- common/autotest_common.sh@1194 -- # return 0 00:09:02.004 23:53:31 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.004 23:53:31 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:02.004 23:53:31 -- common/autotest_common.sh@1205 -- # local i=0 00:09:02.004 23:53:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:02.004 23:53:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.004 23:53:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:02.004 23:53:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.004 23:53:31 -- common/autotest_common.sh@1217 -- # return 0 00:09:02.004 23:53:31 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:02.004 23:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.004 23:53:31 -- common/autotest_common.sh@10 -- # set +x 00:09:02.004 23:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.004 23:53:31 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.004 23:53:31 -- common/autotest_common.sh@638 -- # local es=0 00:09:02.004 23:53:31 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.004 23:53:31 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:02.004 23:53:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.004 23:53:31 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:02.004 23:53:31 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.004 23:53:31 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:02.004 23:53:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:02.004 23:53:31 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:02.004 23:53:31 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:02.004 23:53:31 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.004 [2024-04-26 23:53:31.995860] ctrlr.c: 780:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:02.004 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:02.004 could not add new controller: failed to write to nvme-fabrics device 00:09:02.004 23:53:32 -- common/autotest_common.sh@641 -- # es=1 00:09:02.004 23:53:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:02.004 23:53:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:02.004 23:53:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:02.004 23:53:32 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:02.004 23:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.004 23:53:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.004 23:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.004 23:53:32 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.395 23:53:33 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.395 23:53:33 -- common/autotest_common.sh@1184 -- # local i=0 00:09:03.395 23:53:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.395 23:53:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:03.395 23:53:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:05.329 23:53:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:05.329 23:53:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:05.329 23:53:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.329 23:53:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:05.329 23:53:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.329 23:53:35 -- common/autotest_common.sh@1194 -- # return 0 00:09:05.329 23:53:35 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.589 23:53:35 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.589 23:53:35 -- common/autotest_common.sh@1205 -- # local i=0 00:09:05.589 23:53:35 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:05.589 23:53:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.589 23:53:35 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:05.589 23:53:35 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.589 23:53:35 -- common/autotest_common.sh@1217 -- # return 0 00:09:05.589 23:53:35 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.589 23:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.589 23:53:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.589 23:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.589 23:53:35 -- target/rpc.sh@81 -- # seq 1 5 00:09:05.589 23:53:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:05.589 23:53:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.589 23:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.589 23:53:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.589 23:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.589 23:53:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.589 23:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.589 23:53:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.589 [2024-04-26 23:53:35.699912] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.589 23:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.589 23:53:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:05.589 23:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.589 23:53:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.589 23:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.589 23:53:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.589 23:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.589 23:53:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.589 23:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.589 23:53:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.500 23:53:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.500 23:53:37 -- common/autotest_common.sh@1184 -- # local i=0 00:09:07.500 23:53:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.500 23:53:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:07.500 23:53:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:09.411 23:53:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:09.411 23:53:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:09.411 23:53:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.411 23:53:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:09.411 23:53:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.411 23:53:39 -- common/autotest_common.sh@1194 -- # return 0 00:09:09.411 23:53:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.411 23:53:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.411 23:53:39 -- common/autotest_common.sh@1205 -- # local i=0 00:09:09.411 23:53:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:09.411 23:53:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
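The connect failures and successes above are the subsystem's host-authorization check in action: with allow_any_host disabled, a connect from this host's NQN is rejected with "does not allow host ..." until nvmf_subsystem_add_host whitelists it, and removing the host or re-enabling allow_any_host flips the behaviour back. A compressed sketch of that flow, with the NQNs and the 10.0.0.2:4420 listener taken from this run, scripts/rpc.py again assumed in place of rpc_cmd, and the harness's matching --hostid argument omitted for brevity:

#!/usr/bin/env bash
# Sketch of the host-authorization flow exercised above.
set -euo pipefail

RPC="scripts/rpc.py"
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

"$RPC" nvmf_subsystem_allow_any_host -d "$SUBNQN"     # require explicit host entries

# Not whitelisted yet: the target rejects this connect ("does not allow host ...").
if nvme connect -t tcp -n "$SUBNQN" --hostnqn="$HOSTNQN" -a 10.0.0.2 -s 4420; then
    echo "unexpected: connect succeeded without authorization" >&2
    exit 1
fi

"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"   # whitelist this host NQN
nvme connect -t tcp -n "$SUBNQN" --hostnqn="$HOSTNQN" -a 10.0.0.2 -s 4420
nvme disconnect -n "$SUBNQN"

# nvmf_subsystem_remove_host revokes the entry again, and
# nvmf_subsystem_allow_any_host -e reopens the subsystem to every host,
# which is the sequence the trace walks through next.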
00:09:09.411 23:53:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:09.411 23:53:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.411 23:53:39 -- common/autotest_common.sh@1217 -- # return 0 00:09:09.411 23:53:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.411 23:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.411 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 23:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.411 23:53:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.411 23:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.411 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 23:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.411 23:53:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:09.411 23:53:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:09.411 23:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.411 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 23:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.411 23:53:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.411 23:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.411 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 [2024-04-26 23:53:39.458453] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.411 23:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.411 23:53:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:09.411 23:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.411 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 23:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.411 23:53:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:09.411 23:53:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.411 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 23:53:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.411 23:53:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.793 23:53:40 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.793 23:53:40 -- common/autotest_common.sh@1184 -- # local i=0 00:09:10.793 23:53:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.793 23:53:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:10.793 23:53:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:13.337 23:53:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:13.337 23:53:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:13.337 23:53:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.337 23:53:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:13.337 23:53:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.337 23:53:42 -- 
common/autotest_common.sh@1194 -- # return 0 00:09:13.337 23:53:42 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.337 23:53:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.337 23:53:43 -- common/autotest_common.sh@1205 -- # local i=0 00:09:13.337 23:53:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:13.337 23:53:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.337 23:53:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:13.337 23:53:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.337 23:53:43 -- common/autotest_common.sh@1217 -- # return 0 00:09:13.337 23:53:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.337 23:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.337 23:53:43 -- common/autotest_common.sh@10 -- # set +x 00:09:13.337 23:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.337 23:53:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.337 23:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.337 23:53:43 -- common/autotest_common.sh@10 -- # set +x 00:09:13.337 23:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.337 23:53:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:13.337 23:53:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:13.337 23:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.337 23:53:43 -- common/autotest_common.sh@10 -- # set +x 00:09:13.337 23:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.337 23:53:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.337 23:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.337 23:53:43 -- common/autotest_common.sh@10 -- # set +x 00:09:13.337 [2024-04-26 23:53:43.278656] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.337 23:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.337 23:53:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:13.337 23:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.337 23:53:43 -- common/autotest_common.sh@10 -- # set +x 00:09:13.337 23:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.337 23:53:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:13.337 23:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.337 23:53:43 -- common/autotest_common.sh@10 -- # set +x 00:09:13.337 23:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.337 23:53:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.726 23:53:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.726 23:53:44 -- common/autotest_common.sh@1184 -- # local i=0 00:09:14.726 23:53:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.726 23:53:44 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:09:14.726 23:53:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:16.725 23:53:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:16.725 23:53:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:16.725 23:53:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.725 23:53:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:16.725 23:53:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.725 23:53:46 -- common/autotest_common.sh@1194 -- # return 0 00:09:16.725 23:53:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.725 23:53:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.725 23:53:46 -- common/autotest_common.sh@1205 -- # local i=0 00:09:16.725 23:53:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:16.725 23:53:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.725 23:53:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:16.725 23:53:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.986 23:53:46 -- common/autotest_common.sh@1217 -- # return 0 00:09:16.986 23:53:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.986 23:53:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.986 23:53:46 -- common/autotest_common.sh@10 -- # set +x 00:09:16.986 23:53:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.986 23:53:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.986 23:53:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.986 23:53:46 -- common/autotest_common.sh@10 -- # set +x 00:09:16.986 23:53:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.986 23:53:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.986 23:53:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.986 23:53:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.986 23:53:46 -- common/autotest_common.sh@10 -- # set +x 00:09:16.986 23:53:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.986 23:53:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.986 23:53:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.986 23:53:46 -- common/autotest_common.sh@10 -- # set +x 00:09:16.986 [2024-04-26 23:53:46.993087] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.986 23:53:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.986 23:53:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.986 23:53:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.986 23:53:46 -- common/autotest_common.sh@10 -- # set +x 00:09:16.986 23:53:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.986 23:53:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:16.986 23:53:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.986 23:53:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.986 23:53:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.986 
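Each pass of the loop above builds a complete target from scratch and tears it down again: create the subsystem, add the 10.0.0.2:4420 TCP listener, attach the Malloc1 bdev as namespace 5, open it to any host, connect with nvme-cli, wait until a block device with the SPDKISFASTANDAWESOME serial is visible, then disconnect and delete everything. One iteration, condensed from the commands in the trace (scripts/rpc.py again assumed in place of rpc_cmd, --hostid omitted):

#!/usr/bin/env bash
# One iteration of the create/connect/teardown loop traced above.
set -euo pipefail

RPC="scripts/rpc.py"
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
SERIAL=SPDKISFASTANDAWESOME

"$RPC" nvmf_create_subsystem "$SUBNQN" -s "$SERIAL"
"$RPC" nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
"$RPC" nvmf_subsystem_allow_any_host "$SUBNQN"

nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420

# waitforserial: poll (15 tries, 2 s apart, as the harness does) until a block
# device carrying the subsystem's serial number shows up.
for _ in $(seq 1 15); do
    [ "$(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL")" -ge 1 ] && break
    sleep 2
done

nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_ns "$SUBNQN" 5
"$RPC" nvmf_delete_subsystem "$SUBNQN"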
23:53:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.373 23:53:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.373 23:53:48 -- common/autotest_common.sh@1184 -- # local i=0 00:09:18.373 23:53:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.374 23:53:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:18.374 23:53:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:20.922 23:53:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:20.922 23:53:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:20.922 23:53:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.922 23:53:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:20.922 23:53:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.922 23:53:50 -- common/autotest_common.sh@1194 -- # return 0 00:09:20.922 23:53:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.922 23:53:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.922 23:53:50 -- common/autotest_common.sh@1205 -- # local i=0 00:09:20.922 23:53:50 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:20.922 23:53:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.922 23:53:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:20.922 23:53:50 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.922 23:53:50 -- common/autotest_common.sh@1217 -- # return 0 00:09:20.922 23:53:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.922 23:53:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.922 23:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.922 23:53:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.922 23:53:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.922 23:53:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.922 23:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.922 23:53:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.922 23:53:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:20.922 23:53:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.922 23:53:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.922 23:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.922 23:53:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.922 23:53:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.922 23:53:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.922 23:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.922 [2024-04-26 23:53:50.748172] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.922 23:53:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.922 23:53:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:20.922 
23:53:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.922 23:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.922 23:53:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.922 23:53:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.922 23:53:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.922 23:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.922 23:53:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.922 23:53:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.310 23:53:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.310 23:53:52 -- common/autotest_common.sh@1184 -- # local i=0 00:09:22.310 23:53:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.310 23:53:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:22.310 23:53:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:24.226 23:53:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:24.226 23:53:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:24.226 23:53:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.226 23:53:54 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:24.226 23:53:54 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.226 23:53:54 -- common/autotest_common.sh@1194 -- # return 0 00:09:24.226 23:53:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.226 23:53:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.226 23:53:54 -- common/autotest_common.sh@1205 -- # local i=0 00:09:24.226 23:53:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:24.226 23:53:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.226 23:53:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:24.226 23:53:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.226 23:53:54 -- common/autotest_common.sh@1217 -- # return 0 00:09:24.226 23:53:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.226 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.226 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.226 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.226 23:53:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.226 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.226 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.487 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.487 23:53:54 -- target/rpc.sh@99 -- # seq 1 5 00:09:24.487 23:53:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.487 23:53:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.487 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.487 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.487 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.487 23:53:54 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.487 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.487 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.487 [2024-04-26 23:53:54.472865] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.487 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.487 23:53:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.487 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.488 23:53:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 [2024-04-26 23:53:54.537014] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- 
common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.488 23:53:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 [2024-04-26 23:53:54.593196] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.488 23:53:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 [2024-04-26 23:53:54.653377] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 
23:53:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.488 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.488 23:53:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.488 23:53:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.488 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.488 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.749 23:53:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.749 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.749 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 [2024-04-26 23:53:54.717587] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.749 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.749 23:53:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.749 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.749 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.749 23:53:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.749 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.749 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.749 23:53:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.749 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.749 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.749 23:53:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.749 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.749 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.749 23:53:54 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
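The nvmf_get_stats dump that follows reports one entry per poll group; the test's jsum helper then sums a single field across all of them with the jq filter and awk reducer visible below. A minimal standalone equivalent, reusing $rpc from the sketch above and assuming the same JSON shape as the output printed here:

    # total io_qpairs across every poll group must come out > 0 for the test to pass
    $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'
    # swap in .poll_groups[].admin_qpairs for the admin-queue check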
00:09:24.749 23:53:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.749 23:53:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.749 23:53:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.749 23:53:54 -- target/rpc.sh@110 -- # stats='{ 00:09:24.749 "tick_rate": 2400000000, 00:09:24.749 "poll_groups": [ 00:09:24.749 { 00:09:24.749 "name": "nvmf_tgt_poll_group_0", 00:09:24.749 "admin_qpairs": 0, 00:09:24.749 "io_qpairs": 224, 00:09:24.749 "current_admin_qpairs": 0, 00:09:24.749 "current_io_qpairs": 0, 00:09:24.749 "pending_bdev_io": 0, 00:09:24.749 "completed_nvme_io": 400, 00:09:24.749 "transports": [ 00:09:24.749 { 00:09:24.749 "trtype": "TCP" 00:09:24.749 } 00:09:24.749 ] 00:09:24.749 }, 00:09:24.749 { 00:09:24.749 "name": "nvmf_tgt_poll_group_1", 00:09:24.749 "admin_qpairs": 1, 00:09:24.749 "io_qpairs": 223, 00:09:24.749 "current_admin_qpairs": 0, 00:09:24.749 "current_io_qpairs": 0, 00:09:24.749 "pending_bdev_io": 0, 00:09:24.749 "completed_nvme_io": 226, 00:09:24.749 "transports": [ 00:09:24.749 { 00:09:24.749 "trtype": "TCP" 00:09:24.749 } 00:09:24.749 ] 00:09:24.749 }, 00:09:24.749 { 00:09:24.749 "name": "nvmf_tgt_poll_group_2", 00:09:24.749 "admin_qpairs": 6, 00:09:24.749 "io_qpairs": 218, 00:09:24.749 "current_admin_qpairs": 0, 00:09:24.749 "current_io_qpairs": 0, 00:09:24.749 "pending_bdev_io": 0, 00:09:24.749 "completed_nvme_io": 260, 00:09:24.749 "transports": [ 00:09:24.749 { 00:09:24.749 "trtype": "TCP" 00:09:24.749 } 00:09:24.749 ] 00:09:24.749 }, 00:09:24.749 { 00:09:24.749 "name": "nvmf_tgt_poll_group_3", 00:09:24.749 "admin_qpairs": 0, 00:09:24.749 "io_qpairs": 224, 00:09:24.749 "current_admin_qpairs": 0, 00:09:24.749 "current_io_qpairs": 0, 00:09:24.749 "pending_bdev_io": 0, 00:09:24.749 "completed_nvme_io": 353, 00:09:24.749 "transports": [ 00:09:24.749 { 00:09:24.749 "trtype": "TCP" 00:09:24.749 } 00:09:24.749 ] 00:09:24.749 } 00:09:24.749 ] 00:09:24.749 }' 00:09:24.749 23:53:54 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:24.749 23:53:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:24.749 23:53:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:24.749 23:53:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.749 23:53:54 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:24.749 23:53:54 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:24.749 23:53:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:24.749 23:53:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:24.749 23:53:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.749 23:53:54 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:24.749 23:53:54 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:24.749 23:53:54 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:24.749 23:53:54 -- target/rpc.sh@123 -- # nvmftestfini 00:09:24.750 23:53:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:24.750 23:53:54 -- nvmf/common.sh@117 -- # sync 00:09:24.750 23:53:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.750 23:53:54 -- nvmf/common.sh@120 -- # set +e 00:09:24.750 23:53:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.750 23:53:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.750 rmmod nvme_tcp 00:09:24.750 rmmod nvme_fabrics 00:09:24.750 rmmod nvme_keyring 00:09:24.750 23:53:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.750 23:53:54 -- nvmf/common.sh@124 -- # set -e 00:09:24.750 23:53:54 -- 
nvmf/common.sh@125 -- # return 0 00:09:24.750 23:53:54 -- nvmf/common.sh@478 -- # '[' -n 250325 ']' 00:09:24.750 23:53:54 -- nvmf/common.sh@479 -- # killprocess 250325 00:09:24.750 23:53:54 -- common/autotest_common.sh@936 -- # '[' -z 250325 ']' 00:09:24.750 23:53:54 -- common/autotest_common.sh@940 -- # kill -0 250325 00:09:24.750 23:53:54 -- common/autotest_common.sh@941 -- # uname 00:09:24.750 23:53:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:24.750 23:53:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 250325 00:09:25.010 23:53:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:25.010 23:53:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:25.010 23:53:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 250325' 00:09:25.010 killing process with pid 250325 00:09:25.010 23:53:55 -- common/autotest_common.sh@955 -- # kill 250325 00:09:25.010 23:53:55 -- common/autotest_common.sh@960 -- # wait 250325 00:09:25.010 23:53:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:25.010 23:53:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:25.010 23:53:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:25.010 23:53:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.010 23:53:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.010 23:53:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.010 23:53:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.011 23:53:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.556 23:53:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:27.556 00:09:27.556 real 0m37.615s 00:09:27.556 user 1m53.355s 00:09:27.556 sys 0m7.385s 00:09:27.556 23:53:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:27.556 23:53:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.556 ************************************ 00:09:27.556 END TEST nvmf_rpc 00:09:27.556 ************************************ 00:09:27.556 23:53:57 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:27.556 23:53:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:27.556 23:53:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.556 23:53:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.556 ************************************ 00:09:27.556 START TEST nvmf_invalid 00:09:27.556 ************************************ 00:09:27.556 23:53:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:27.556 * Looking for test storage... 
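The killprocess call above is how nvmftestfini stops the target: it confirms the pid is still alive, checks the command name so it never signals a bare sudo wrapper, then kills the reactor process and waits for it before the interfaces are flushed. A rough sketch of that guard, assuming the target pid sits in $nvmfpid and was launched from the same shell (so wait can reap it); the real helper's sudo handling is more involved than shown:

    kill -0 "$nvmfpid"                             # target must still be running
    proc=$(ps --no-headers -o comm= "$nvmfpid")    # e.g. reactor_0 in the run above
    if [ "$proc" != sudo ]; then                   # never kill a bare sudo process
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"                            # reap it so ports and hugepages are freed
    fi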
00:09:27.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.556 23:53:57 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.556 23:53:57 -- nvmf/common.sh@7 -- # uname -s 00:09:27.556 23:53:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.556 23:53:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.556 23:53:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.556 23:53:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.556 23:53:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.556 23:53:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.556 23:53:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.556 23:53:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.556 23:53:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.556 23:53:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.556 23:53:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:27.556 23:53:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:27.556 23:53:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.556 23:53:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.556 23:53:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.556 23:53:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.556 23:53:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.556 23:53:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.556 23:53:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.556 23:53:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.556 23:53:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.556 23:53:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.556 23:53:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.556 23:53:57 -- paths/export.sh@5 -- # export PATH 00:09:27.556 23:53:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.556 23:53:57 -- nvmf/common.sh@47 -- # : 0 00:09:27.556 23:53:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.556 23:53:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.556 23:53:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.556 23:53:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.556 23:53:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.557 23:53:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.557 23:53:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.557 23:53:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.557 23:53:57 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:27.557 23:53:57 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:27.557 23:53:57 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:27.557 23:53:57 -- target/invalid.sh@14 -- # target=foobar 00:09:27.557 23:53:57 -- target/invalid.sh@16 -- # RANDOM=0 00:09:27.557 23:53:57 -- target/invalid.sh@34 -- # nvmftestinit 00:09:27.557 23:53:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:27.557 23:53:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.557 23:53:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:27.557 23:53:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:27.557 23:53:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:27.557 23:53:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.557 23:53:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.557 23:53:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.557 23:53:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:27.557 23:53:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:27.557 23:53:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.557 23:53:57 -- common/autotest_common.sh@10 -- # set +x 00:09:35.699 23:54:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:35.699 23:54:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.699 23:54:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.699 23:54:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.699 23:54:04 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.699 23:54:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.699 23:54:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.699 23:54:04 -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.699 23:54:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.699 23:54:04 -- nvmf/common.sh@296 -- # e810=() 00:09:35.699 23:54:04 -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.699 23:54:04 -- nvmf/common.sh@297 -- # x722=() 00:09:35.699 23:54:04 -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.699 23:54:04 -- nvmf/common.sh@298 -- # mlx=() 00:09:35.699 23:54:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.699 23:54:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.699 23:54:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.699 23:54:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:35.699 23:54:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.699 23:54:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.699 23:54:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:35.699 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:35.699 23:54:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.699 23:54:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:35.699 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:35.699 23:54:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.699 23:54:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.699 
23:54:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.699 23:54:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:35.699 23:54:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.699 23:54:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:35.699 Found net devices under 0000:31:00.0: cvl_0_0 00:09:35.699 23:54:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.699 23:54:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.699 23:54:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.699 23:54:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:35.699 23:54:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.699 23:54:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:35.699 Found net devices under 0000:31:00.1: cvl_0_1 00:09:35.699 23:54:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.699 23:54:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:35.699 23:54:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:35.699 23:54:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:35.699 23:54:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.699 23:54:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.699 23:54:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.699 23:54:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:35.699 23:54:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.699 23:54:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.699 23:54:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:35.699 23:54:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.699 23:54:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.699 23:54:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:35.699 23:54:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:35.699 23:54:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.699 23:54:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.699 23:54:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.699 23:54:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.699 23:54:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:35.699 23:54:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.699 23:54:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.699 23:54:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.699 23:54:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:09:35.699 00:09:35.699 --- 10.0.0.2 ping statistics --- 00:09:35.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.699 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:09:35.699 23:54:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:35.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:09:35.699 00:09:35.699 --- 10.0.0.1 ping statistics --- 00:09:35.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.699 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:09:35.699 23:54:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.699 23:54:04 -- nvmf/common.sh@411 -- # return 0 00:09:35.699 23:54:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:35.699 23:54:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.699 23:54:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:35.699 23:54:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.699 23:54:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:35.699 23:54:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:35.699 23:54:05 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:35.699 23:54:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:35.699 23:54:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:35.699 23:54:05 -- common/autotest_common.sh@10 -- # set +x 00:09:35.699 23:54:05 -- nvmf/common.sh@470 -- # nvmfpid=260381 00:09:35.699 23:54:05 -- nvmf/common.sh@471 -- # waitforlisten 260381 00:09:35.699 23:54:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.699 23:54:05 -- common/autotest_common.sh@817 -- # '[' -z 260381 ']' 00:09:35.699 23:54:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.699 23:54:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:35.699 23:54:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.699 23:54:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:35.699 23:54:05 -- common/autotest_common.sh@10 -- # set +x 00:09:35.699 [2024-04-26 23:54:05.074236] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:09:35.699 [2024-04-26 23:54:05.074298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.699 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.699 [2024-04-26 23:54:05.148050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.699 [2024-04-26 23:54:05.222421] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.699 [2024-04-26 23:54:05.222462] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.699 [2024-04-26 23:54:05.222470] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.699 [2024-04-26 23:54:05.222477] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.699 [2024-04-26 23:54:05.222483] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
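Everything from gather_supported_nvmf_pci_devs down to the two pings above is the physical-NIC topology setup: the two e810 ports enumerate as cvl_0_0 and cvl_0_1, the target port is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2 while the initiator port keeps 10.0.0.1 in the root namespace, TCP port 4420 is opened, and nvmf_tgt is then started inside that namespace so its listener binds the target address. A condensed sketch of the same wiring, assuming the interfaces already carry those names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

    # the target app then runs inside the namespace (flags as in the run above)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &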
00:09:35.700 [2024-04-26 23:54:05.222591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.700 [2024-04-26 23:54:05.222726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.700 [2024-04-26 23:54:05.222885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.700 [2024-04-26 23:54:05.222900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.700 23:54:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:35.700 23:54:05 -- common/autotest_common.sh@850 -- # return 0 00:09:35.700 23:54:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:35.700 23:54:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:35.700 23:54:05 -- common/autotest_common.sh@10 -- # set +x 00:09:35.700 23:54:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.700 23:54:05 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:35.700 23:54:05 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17283 00:09:35.961 [2024-04-26 23:54:06.041749] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:35.961 23:54:06 -- target/invalid.sh@40 -- # out='request: 00:09:35.961 { 00:09:35.961 "nqn": "nqn.2016-06.io.spdk:cnode17283", 00:09:35.961 "tgt_name": "foobar", 00:09:35.961 "method": "nvmf_create_subsystem", 00:09:35.961 "req_id": 1 00:09:35.961 } 00:09:35.961 Got JSON-RPC error response 00:09:35.961 response: 00:09:35.961 { 00:09:35.961 "code": -32603, 00:09:35.961 "message": "Unable to find target foobar" 00:09:35.961 }' 00:09:35.961 23:54:06 -- target/invalid.sh@41 -- # [[ request: 00:09:35.961 { 00:09:35.961 "nqn": "nqn.2016-06.io.spdk:cnode17283", 00:09:35.961 "tgt_name": "foobar", 00:09:35.961 "method": "nvmf_create_subsystem", 00:09:35.961 "req_id": 1 00:09:35.961 } 00:09:35.961 Got JSON-RPC error response 00:09:35.961 response: 00:09:35.961 { 00:09:35.961 "code": -32603, 00:09:35.961 "message": "Unable to find target foobar" 00:09:35.961 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:35.961 23:54:06 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:35.961 23:54:06 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12117 00:09:36.221 [2024-04-26 23:54:06.214352] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12117: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:36.221 23:54:06 -- target/invalid.sh@45 -- # out='request: 00:09:36.221 { 00:09:36.221 "nqn": "nqn.2016-06.io.spdk:cnode12117", 00:09:36.221 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:36.221 "method": "nvmf_create_subsystem", 00:09:36.221 "req_id": 1 00:09:36.221 } 00:09:36.221 Got JSON-RPC error response 00:09:36.221 response: 00:09:36.221 { 00:09:36.221 "code": -32602, 00:09:36.221 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:36.221 }' 00:09:36.221 23:54:06 -- target/invalid.sh@46 -- # [[ request: 00:09:36.221 { 00:09:36.221 "nqn": "nqn.2016-06.io.spdk:cnode12117", 00:09:36.221 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:36.221 "method": "nvmf_create_subsystem", 00:09:36.221 "req_id": 1 00:09:36.221 } 00:09:36.221 Got JSON-RPC error response 00:09:36.221 response: 00:09:36.221 { 
00:09:36.221 "code": -32602, 00:09:36.221 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:36.221 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:36.221 23:54:06 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:36.221 23:54:06 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25885 00:09:36.221 [2024-04-26 23:54:06.378938] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25885: invalid model number 'SPDK_Controller' 00:09:36.221 23:54:06 -- target/invalid.sh@50 -- # out='request: 00:09:36.221 { 00:09:36.221 "nqn": "nqn.2016-06.io.spdk:cnode25885", 00:09:36.221 "model_number": "SPDK_Controller\u001f", 00:09:36.221 "method": "nvmf_create_subsystem", 00:09:36.221 "req_id": 1 00:09:36.221 } 00:09:36.221 Got JSON-RPC error response 00:09:36.221 response: 00:09:36.221 { 00:09:36.221 "code": -32602, 00:09:36.221 "message": "Invalid MN SPDK_Controller\u001f" 00:09:36.221 }' 00:09:36.221 23:54:06 -- target/invalid.sh@51 -- # [[ request: 00:09:36.221 { 00:09:36.221 "nqn": "nqn.2016-06.io.spdk:cnode25885", 00:09:36.221 "model_number": "SPDK_Controller\u001f", 00:09:36.221 "method": "nvmf_create_subsystem", 00:09:36.221 "req_id": 1 00:09:36.221 } 00:09:36.221 Got JSON-RPC error response 00:09:36.221 response: 00:09:36.221 { 00:09:36.221 "code": -32602, 00:09:36.221 "message": "Invalid MN SPDK_Controller\u001f" 00:09:36.221 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:36.221 23:54:06 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:36.221 23:54:06 -- target/invalid.sh@19 -- # local length=21 ll 00:09:36.221 23:54:06 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:36.221 23:54:06 -- target/invalid.sh@21 -- # local chars 00:09:36.221 23:54:06 -- target/invalid.sh@22 -- # local string 00:09:36.221 23:54:06 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:36.221 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.221 23:54:06 -- target/invalid.sh@25 -- # printf %x 108 00:09:36.221 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:36.221 23:54:06 -- target/invalid.sh@25 -- # string+=l 00:09:36.221 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.221 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.221 23:54:06 -- target/invalid.sh@25 -- # printf %x 37 00:09:36.221 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:36.221 23:54:06 -- target/invalid.sh@25 -- # string+=% 00:09:36.221 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.221 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.221 23:54:06 -- target/invalid.sh@25 -- # printf %x 57 00:09:36.221 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:36.221 23:54:06 -- target/invalid.sh@25 -- # string+=9 00:09:36.222 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.222 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.222 23:54:06 -- target/invalid.sh@25 -- # printf %x 83 00:09:36.485 23:54:06 -- 
target/invalid.sh@25 -- # echo -e '\x53' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=S 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 41 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=')' 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 111 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=o 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 86 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=V 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 92 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+='\' 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 90 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=Z 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 88 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=X 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 75 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=K 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 45 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=- 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 103 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=g 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 68 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=D 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 113 00:09:36.485 23:54:06 -- 
target/invalid.sh@25 -- # echo -e '\x71' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=q 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 110 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=n 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 124 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+='|' 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 114 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=r 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 62 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+='>' 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 87 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=W 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # printf %x 108 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:36.485 23:54:06 -- target/invalid.sh@25 -- # string+=l 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.485 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.485 23:54:06 -- target/invalid.sh@28 -- # [[ l == \- ]] 00:09:36.485 23:54:06 -- target/invalid.sh@31 -- # echo 'l%9S)oV\ZXK-gDqn|r>Wl' 00:09:36.485 23:54:06 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'l%9S)oV\ZXK-gDqn|r>Wl' nqn.2016-06.io.spdk:cnode26494 00:09:36.747 [2024-04-26 23:54:06.711968] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26494: invalid serial number 'l%9S)oV\ZXK-gDqn|r>Wl' 00:09:36.747 23:54:06 -- target/invalid.sh@54 -- # out='request: 00:09:36.747 { 00:09:36.747 "nqn": "nqn.2016-06.io.spdk:cnode26494", 00:09:36.747 "serial_number": "l%9S)oV\\ZXK-gDqn|r>Wl", 00:09:36.747 "method": "nvmf_create_subsystem", 00:09:36.747 "req_id": 1 00:09:36.747 } 00:09:36.747 Got JSON-RPC error response 00:09:36.747 response: 00:09:36.747 { 00:09:36.747 "code": -32602, 00:09:36.747 "message": "Invalid SN l%9S)oV\\ZXK-gDqn|r>Wl" 00:09:36.747 }' 00:09:36.747 23:54:06 -- target/invalid.sh@55 -- # [[ request: 00:09:36.747 { 00:09:36.747 "nqn": "nqn.2016-06.io.spdk:cnode26494", 00:09:36.747 "serial_number": "l%9S)oV\\ZXK-gDqn|r>Wl", 00:09:36.747 "method": "nvmf_create_subsystem", 00:09:36.747 "req_id": 1 00:09:36.747 } 00:09:36.747 Got JSON-RPC error response 00:09:36.747 response: 00:09:36.747 { 00:09:36.747 "code": -32602, 00:09:36.747 
"message": "Invalid SN l%9S)oV\\ZXK-gDqn|r>Wl" 00:09:36.747 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:36.747 23:54:06 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:36.747 23:54:06 -- target/invalid.sh@19 -- # local length=41 ll 00:09:36.748 23:54:06 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:36.748 23:54:06 -- target/invalid.sh@21 -- # local chars 00:09:36.748 23:54:06 -- target/invalid.sh@22 -- # local string 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 36 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+='$' 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 127 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=$'\177' 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 107 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=k 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 53 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=5 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 45 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=- 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 86 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=V 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 126 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+='~' 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 78 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=N 00:09:36.748 23:54:06 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 35 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+='#' 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 50 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=2 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 33 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+='!' 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 38 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+='&' 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 93 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=']' 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 118 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=v 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 42 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+='*' 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 66 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=B 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 46 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=. 
00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 59 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=';' 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 64 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=@ 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 80 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=P 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 53 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=5 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 85 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=U 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 37 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=% 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 80 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=P 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 79 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=O 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 121 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+=y 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.748 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # printf %x 94 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:36.748 23:54:06 -- target/invalid.sh@25 -- # string+='^' 00:09:36.749 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.749 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.749 23:54:06 -- target/invalid.sh@25 -- # printf %x 52 00:09:36.749 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:36.749 23:54:06 -- target/invalid.sh@25 -- # string+=4 
00:09:36.749 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.749 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.749 23:54:06 -- target/invalid.sh@25 -- # printf %x 53 00:09:36.749 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:36.749 23:54:06 -- target/invalid.sh@25 -- # string+=5 00:09:36.749 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.749 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.749 23:54:06 -- target/invalid.sh@25 -- # printf %x 96 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # string+='`' 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # printf %x 123 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # string+='{' 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # printf %x 79 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # string+=O 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # printf %x 57 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # string+=9 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # printf %x 47 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:37.009 23:54:06 -- target/invalid.sh@25 -- # string+=/ 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.009 23:54:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.009 23:54:07 -- target/invalid.sh@25 -- # printf %x 71 00:09:37.009 23:54:07 -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:37.009 23:54:07 -- target/invalid.sh@25 -- # string+=G 00:09:37.009 23:54:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.009 23:54:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.009 23:54:07 -- target/invalid.sh@25 -- # printf %x 100 00:09:37.009 23:54:07 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:37.009 23:54:07 -- target/invalid.sh@25 -- # string+=d 00:09:37.009 23:54:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.009 23:54:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.009 23:54:07 -- target/invalid.sh@25 -- # printf %x 110 00:09:37.009 23:54:07 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:37.009 23:54:07 -- target/invalid.sh@25 -- # string+=n 00:09:37.009 23:54:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.009 23:54:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # printf %x 35 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # string+='#' 00:09:37.010 23:54:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.010 23:54:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # printf %x 48 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # string+=0 
00:09:37.010 23:54:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.010 23:54:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # printf %x 119 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # string+=w 00:09:37.010 23:54:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.010 23:54:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # printf %x 72 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:37.010 23:54:07 -- target/invalid.sh@25 -- # string+=H 00:09:37.010 23:54:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:37.010 23:54:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:37.010 23:54:07 -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:09:37.010 23:54:07 -- target/invalid.sh@31 -- # echo '$k5-V~N#2!&]v*B.;@P5U%POy^45`{O9/Gdn#0wH' 00:09:37.010 23:54:07 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$k5-V~N#2!&]v*B.;@P5U%POy^45`{O9/Gdn#0wH' nqn.2016-06.io.spdk:cnode27147 00:09:37.010 [2024-04-26 23:54:07.193558] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27147: invalid model number '$k5-V~N#2!&]v*B.;@P5U%POy^45`{O9/Gdn#0wH' 00:09:37.010 23:54:07 -- target/invalid.sh@58 -- # out='request: 00:09:37.010 { 00:09:37.010 "nqn": "nqn.2016-06.io.spdk:cnode27147", 00:09:37.010 "model_number": "$\u007fk5-V~N#2!&]v*B.;@P5U%POy^45`{O9/Gdn#0wH", 00:09:37.010 "method": "nvmf_create_subsystem", 00:09:37.010 "req_id": 1 00:09:37.010 } 00:09:37.010 Got JSON-RPC error response 00:09:37.010 response: 00:09:37.010 { 00:09:37.010 "code": -32602, 00:09:37.010 "message": "Invalid MN $\u007fk5-V~N#2!&]v*B.;@P5U%POy^45`{O9/Gdn#0wH" 00:09:37.010 }' 00:09:37.010 23:54:07 -- target/invalid.sh@59 -- # [[ request: 00:09:37.010 { 00:09:37.010 "nqn": "nqn.2016-06.io.spdk:cnode27147", 00:09:37.010 "model_number": "$\u007fk5-V~N#2!&]v*B.;@P5U%POy^45`{O9/Gdn#0wH", 00:09:37.010 "method": "nvmf_create_subsystem", 00:09:37.010 "req_id": 1 00:09:37.010 } 00:09:37.010 Got JSON-RPC error response 00:09:37.010 response: 00:09:37.010 { 00:09:37.010 "code": -32602, 00:09:37.010 "message": "Invalid MN $\u007fk5-V~N#2!&]v*B.;@P5U%POy^45`{O9/Gdn#0wH" 00:09:37.010 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:37.010 23:54:07 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:37.271 [2024-04-26 23:54:07.362150] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.271 23:54:07 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:37.531 23:54:07 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:37.531 23:54:07 -- target/invalid.sh@67 -- # echo '' 00:09:37.531 23:54:07 -- target/invalid.sh@67 -- # head -n 1 00:09:37.531 23:54:07 -- target/invalid.sh@67 -- # IP= 00:09:37.531 23:54:07 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:37.531 [2024-04-26 23:54:07.708687] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:37.531 23:54:07 -- target/invalid.sh@69 -- # out='request: 00:09:37.531 { 00:09:37.531 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:37.531 
"listen_address": { 00:09:37.531 "trtype": "tcp", 00:09:37.531 "traddr": "", 00:09:37.531 "trsvcid": "4421" 00:09:37.531 }, 00:09:37.531 "method": "nvmf_subsystem_remove_listener", 00:09:37.531 "req_id": 1 00:09:37.531 } 00:09:37.531 Got JSON-RPC error response 00:09:37.531 response: 00:09:37.531 { 00:09:37.531 "code": -32602, 00:09:37.531 "message": "Invalid parameters" 00:09:37.531 }' 00:09:37.531 23:54:07 -- target/invalid.sh@70 -- # [[ request: 00:09:37.531 { 00:09:37.531 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:37.531 "listen_address": { 00:09:37.531 "trtype": "tcp", 00:09:37.531 "traddr": "", 00:09:37.531 "trsvcid": "4421" 00:09:37.531 }, 00:09:37.531 "method": "nvmf_subsystem_remove_listener", 00:09:37.531 "req_id": 1 00:09:37.531 } 00:09:37.531 Got JSON-RPC error response 00:09:37.531 response: 00:09:37.531 { 00:09:37.531 "code": -32602, 00:09:37.531 "message": "Invalid parameters" 00:09:37.531 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:37.531 23:54:07 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27725 -i 0 00:09:37.791 [2024-04-26 23:54:07.869158] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27725: invalid cntlid range [0-65519] 00:09:37.791 23:54:07 -- target/invalid.sh@73 -- # out='request: 00:09:37.791 { 00:09:37.791 "nqn": "nqn.2016-06.io.spdk:cnode27725", 00:09:37.791 "min_cntlid": 0, 00:09:37.791 "method": "nvmf_create_subsystem", 00:09:37.791 "req_id": 1 00:09:37.791 } 00:09:37.791 Got JSON-RPC error response 00:09:37.791 response: 00:09:37.791 { 00:09:37.791 "code": -32602, 00:09:37.791 "message": "Invalid cntlid range [0-65519]" 00:09:37.791 }' 00:09:37.791 23:54:07 -- target/invalid.sh@74 -- # [[ request: 00:09:37.791 { 00:09:37.791 "nqn": "nqn.2016-06.io.spdk:cnode27725", 00:09:37.791 "min_cntlid": 0, 00:09:37.791 "method": "nvmf_create_subsystem", 00:09:37.791 "req_id": 1 00:09:37.791 } 00:09:37.791 Got JSON-RPC error response 00:09:37.791 response: 00:09:37.791 { 00:09:37.791 "code": -32602, 00:09:37.791 "message": "Invalid cntlid range [0-65519]" 00:09:37.791 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.791 23:54:07 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31927 -i 65520 00:09:38.051 [2024-04-26 23:54:08.033691] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31927: invalid cntlid range [65520-65519] 00:09:38.051 23:54:08 -- target/invalid.sh@75 -- # out='request: 00:09:38.051 { 00:09:38.051 "nqn": "nqn.2016-06.io.spdk:cnode31927", 00:09:38.051 "min_cntlid": 65520, 00:09:38.051 "method": "nvmf_create_subsystem", 00:09:38.051 "req_id": 1 00:09:38.051 } 00:09:38.051 Got JSON-RPC error response 00:09:38.051 response: 00:09:38.051 { 00:09:38.051 "code": -32602, 00:09:38.051 "message": "Invalid cntlid range [65520-65519]" 00:09:38.051 }' 00:09:38.051 23:54:08 -- target/invalid.sh@76 -- # [[ request: 00:09:38.051 { 00:09:38.051 "nqn": "nqn.2016-06.io.spdk:cnode31927", 00:09:38.051 "min_cntlid": 65520, 00:09:38.051 "method": "nvmf_create_subsystem", 00:09:38.051 "req_id": 1 00:09:38.051 } 00:09:38.051 Got JSON-RPC error response 00:09:38.051 response: 00:09:38.051 { 00:09:38.051 "code": -32602, 00:09:38.051 "message": "Invalid cntlid range [65520-65519]" 00:09:38.051 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:38.051 23:54:08 -- target/invalid.sh@77 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28934 -I 0 00:09:38.051 [2024-04-26 23:54:08.198201] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28934: invalid cntlid range [1-0] 00:09:38.051 23:54:08 -- target/invalid.sh@77 -- # out='request: 00:09:38.051 { 00:09:38.051 "nqn": "nqn.2016-06.io.spdk:cnode28934", 00:09:38.051 "max_cntlid": 0, 00:09:38.051 "method": "nvmf_create_subsystem", 00:09:38.051 "req_id": 1 00:09:38.051 } 00:09:38.051 Got JSON-RPC error response 00:09:38.051 response: 00:09:38.051 { 00:09:38.051 "code": -32602, 00:09:38.051 "message": "Invalid cntlid range [1-0]" 00:09:38.051 }' 00:09:38.051 23:54:08 -- target/invalid.sh@78 -- # [[ request: 00:09:38.051 { 00:09:38.051 "nqn": "nqn.2016-06.io.spdk:cnode28934", 00:09:38.051 "max_cntlid": 0, 00:09:38.051 "method": "nvmf_create_subsystem", 00:09:38.051 "req_id": 1 00:09:38.051 } 00:09:38.051 Got JSON-RPC error response 00:09:38.051 response: 00:09:38.051 { 00:09:38.051 "code": -32602, 00:09:38.051 "message": "Invalid cntlid range [1-0]" 00:09:38.051 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:38.051 23:54:08 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31245 -I 65520 00:09:38.312 [2024-04-26 23:54:08.366804] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31245: invalid cntlid range [1-65520] 00:09:38.312 23:54:08 -- target/invalid.sh@79 -- # out='request: 00:09:38.312 { 00:09:38.312 "nqn": "nqn.2016-06.io.spdk:cnode31245", 00:09:38.312 "max_cntlid": 65520, 00:09:38.312 "method": "nvmf_create_subsystem", 00:09:38.312 "req_id": 1 00:09:38.312 } 00:09:38.312 Got JSON-RPC error response 00:09:38.312 response: 00:09:38.312 { 00:09:38.312 "code": -32602, 00:09:38.312 "message": "Invalid cntlid range [1-65520]" 00:09:38.312 }' 00:09:38.312 23:54:08 -- target/invalid.sh@80 -- # [[ request: 00:09:38.312 { 00:09:38.312 "nqn": "nqn.2016-06.io.spdk:cnode31245", 00:09:38.312 "max_cntlid": 65520, 00:09:38.312 "method": "nvmf_create_subsystem", 00:09:38.312 "req_id": 1 00:09:38.312 } 00:09:38.312 Got JSON-RPC error response 00:09:38.312 response: 00:09:38.312 { 00:09:38.312 "code": -32602, 00:09:38.312 "message": "Invalid cntlid range [1-65520]" 00:09:38.312 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:38.312 23:54:08 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19879 -i 6 -I 5 00:09:38.572 [2024-04-26 23:54:08.539318] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19879: invalid cntlid range [6-5] 00:09:38.572 23:54:08 -- target/invalid.sh@83 -- # out='request: 00:09:38.572 { 00:09:38.572 "nqn": "nqn.2016-06.io.spdk:cnode19879", 00:09:38.572 "min_cntlid": 6, 00:09:38.572 "max_cntlid": 5, 00:09:38.572 "method": "nvmf_create_subsystem", 00:09:38.572 "req_id": 1 00:09:38.572 } 00:09:38.572 Got JSON-RPC error response 00:09:38.572 response: 00:09:38.572 { 00:09:38.572 "code": -32602, 00:09:38.572 "message": "Invalid cntlid range [6-5]" 00:09:38.572 }' 00:09:38.572 23:54:08 -- target/invalid.sh@84 -- # [[ request: 00:09:38.572 { 00:09:38.572 "nqn": "nqn.2016-06.io.spdk:cnode19879", 00:09:38.572 "min_cntlid": 6, 00:09:38.572 "max_cntlid": 5, 00:09:38.572 "method": "nvmf_create_subsystem", 00:09:38.572 "req_id": 1 00:09:38.572 } 00:09:38.572 Got 
JSON-RPC error response 00:09:38.572 response: 00:09:38.572 { 00:09:38.572 "code": -32602, 00:09:38.572 "message": "Invalid cntlid range [6-5]" 00:09:38.572 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:38.572 23:54:08 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:38.572 23:54:08 -- target/invalid.sh@87 -- # out='request: 00:09:38.572 { 00:09:38.572 "name": "foobar", 00:09:38.572 "method": "nvmf_delete_target", 00:09:38.572 "req_id": 1 00:09:38.572 } 00:09:38.572 Got JSON-RPC error response 00:09:38.572 response: 00:09:38.572 { 00:09:38.572 "code": -32602, 00:09:38.572 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:38.572 }' 00:09:38.572 23:54:08 -- target/invalid.sh@88 -- # [[ request: 00:09:38.572 { 00:09:38.572 "name": "foobar", 00:09:38.572 "method": "nvmf_delete_target", 00:09:38.572 "req_id": 1 00:09:38.572 } 00:09:38.572 Got JSON-RPC error response 00:09:38.572 response: 00:09:38.572 { 00:09:38.572 "code": -32602, 00:09:38.572 "message": "The specified target doesn't exist, cannot delete it." 00:09:38.572 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:38.572 23:54:08 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:38.572 23:54:08 -- target/invalid.sh@91 -- # nvmftestfini 00:09:38.572 23:54:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:38.572 23:54:08 -- nvmf/common.sh@117 -- # sync 00:09:38.572 23:54:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.572 23:54:08 -- nvmf/common.sh@120 -- # set +e 00:09:38.572 23:54:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.572 23:54:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.572 rmmod nvme_tcp 00:09:38.572 rmmod nvme_fabrics 00:09:38.572 rmmod nvme_keyring 00:09:38.572 23:54:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.572 23:54:08 -- nvmf/common.sh@124 -- # set -e 00:09:38.572 23:54:08 -- nvmf/common.sh@125 -- # return 0 00:09:38.572 23:54:08 -- nvmf/common.sh@478 -- # '[' -n 260381 ']' 00:09:38.572 23:54:08 -- nvmf/common.sh@479 -- # killprocess 260381 00:09:38.572 23:54:08 -- common/autotest_common.sh@936 -- # '[' -z 260381 ']' 00:09:38.572 23:54:08 -- common/autotest_common.sh@940 -- # kill -0 260381 00:09:38.572 23:54:08 -- common/autotest_common.sh@941 -- # uname 00:09:38.572 23:54:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:38.572 23:54:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 260381 00:09:38.832 23:54:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:38.832 23:54:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:38.832 23:54:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 260381' 00:09:38.832 killing process with pid 260381 00:09:38.832 23:54:08 -- common/autotest_common.sh@955 -- # kill 260381 00:09:38.832 23:54:08 -- common/autotest_common.sh@960 -- # wait 260381 00:09:38.832 23:54:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:38.832 23:54:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:38.832 23:54:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:38.832 23:54:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.832 23:54:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.832 23:54:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.832 23:54:08 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.832 23:54:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.373 23:54:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:41.373 00:09:41.373 real 0m13.571s 00:09:41.373 user 0m19.094s 00:09:41.373 sys 0m6.387s 00:09:41.373 23:54:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:41.373 23:54:11 -- common/autotest_common.sh@10 -- # set +x 00:09:41.373 ************************************ 00:09:41.373 END TEST nvmf_invalid 00:09:41.373 ************************************ 00:09:41.373 23:54:11 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:41.373 23:54:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:41.373 23:54:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.373 23:54:11 -- common/autotest_common.sh@10 -- # set +x 00:09:41.373 ************************************ 00:09:41.373 START TEST nvmf_abort 00:09:41.373 ************************************ 00:09:41.374 23:54:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:41.374 * Looking for test storage... 00:09:41.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.374 23:54:11 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.374 23:54:11 -- nvmf/common.sh@7 -- # uname -s 00:09:41.374 23:54:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.374 23:54:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.374 23:54:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.374 23:54:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.374 23:54:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.374 23:54:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.374 23:54:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.374 23:54:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.374 23:54:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.374 23:54:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.374 23:54:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:41.374 23:54:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:41.374 23:54:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.374 23:54:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.374 23:54:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.374 23:54:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.374 23:54:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.374 23:54:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.374 23:54:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.374 23:54:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.374 23:54:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.374 23:54:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.374 23:54:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.374 23:54:11 -- paths/export.sh@5 -- # export PATH 00:09:41.374 23:54:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.374 23:54:11 -- nvmf/common.sh@47 -- # : 0 00:09:41.374 23:54:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.374 23:54:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.374 23:54:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.374 23:54:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.374 23:54:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.374 23:54:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.374 23:54:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.374 23:54:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.374 23:54:11 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.374 23:54:11 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:41.374 23:54:11 -- target/abort.sh@14 -- # nvmftestinit 00:09:41.374 23:54:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:41.374 23:54:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.374 23:54:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:41.374 23:54:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:41.374 23:54:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:41.374 23:54:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:41.374 23:54:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.374 23:54:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.374 23:54:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:41.374 23:54:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:41.374 23:54:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:41.374 23:54:11 -- common/autotest_common.sh@10 -- # set +x 00:09:49.509 23:54:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:49.509 23:54:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.509 23:54:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.509 23:54:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.509 23:54:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.509 23:54:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.509 23:54:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.509 23:54:18 -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.509 23:54:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.509 23:54:18 -- nvmf/common.sh@296 -- # e810=() 00:09:49.509 23:54:18 -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.509 23:54:18 -- nvmf/common.sh@297 -- # x722=() 00:09:49.509 23:54:18 -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.509 23:54:18 -- nvmf/common.sh@298 -- # mlx=() 00:09:49.509 23:54:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.509 23:54:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.509 23:54:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.509 23:54:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:49.509 23:54:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.509 23:54:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.509 23:54:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:49.509 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:49.509 23:54:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.509 23:54:18 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:49.509 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:49.509 23:54:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.509 23:54:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.509 23:54:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.509 23:54:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:49.509 23:54:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.509 23:54:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:49.509 Found net devices under 0000:31:00.0: cvl_0_0 00:09:49.509 23:54:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.509 23:54:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.509 23:54:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.509 23:54:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:49.509 23:54:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.509 23:54:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:49.509 Found net devices under 0000:31:00.1: cvl_0_1 00:09:49.509 23:54:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.509 23:54:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:49.509 23:54:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:49.509 23:54:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:49.509 23:54:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.509 23:54:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.509 23:54:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.509 23:54:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:49.509 23:54:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.509 23:54:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.509 23:54:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:49.509 23:54:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.509 23:54:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.509 23:54:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:49.509 23:54:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:49.509 23:54:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.509 23:54:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.509 23:54:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.509 23:54:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.509 23:54:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:49.509 23:54:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:09:49.509 23:54:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.509 23:54:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.509 23:54:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:49.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:09:49.509 00:09:49.509 --- 10.0.0.2 ping statistics --- 00:09:49.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.509 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:09:49.509 23:54:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:09:49.509 00:09:49.509 --- 10.0.0.1 ping statistics --- 00:09:49.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.509 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:09:49.509 23:54:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.509 23:54:18 -- nvmf/common.sh@411 -- # return 0 00:09:49.509 23:54:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:49.509 23:54:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.509 23:54:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:49.509 23:54:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.509 23:54:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:49.509 23:54:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:49.509 23:54:18 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:49.509 23:54:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:49.509 23:54:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:49.509 23:54:18 -- common/autotest_common.sh@10 -- # set +x 00:09:49.509 23:54:18 -- nvmf/common.sh@470 -- # nvmfpid=266064 00:09:49.509 23:54:18 -- nvmf/common.sh@471 -- # waitforlisten 266064 00:09:49.509 23:54:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:49.509 23:54:18 -- common/autotest_common.sh@817 -- # '[' -z 266064 ']' 00:09:49.509 23:54:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.509 23:54:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:49.509 23:54:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.509 23:54:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:49.510 23:54:18 -- common/autotest_common.sh@10 -- # set +x 00:09:49.510 [2024-04-26 23:54:18.638956] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
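(The nvmf_tcp_init steps logged above give the target a private network namespace so NVMe/TCP traffic crosses real interfaces. A minimal sketch of the equivalent manual setup follows; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones this run used and will differ on other hosts.)
    ip netns add cvl_0_0_ns_spdk                                   # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                             # connectivity check, as the harness does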
00:09:49.510 [2024-04-26 23:54:18.639004] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.510 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.510 [2024-04-26 23:54:18.705134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.510 [2024-04-26 23:54:18.769496] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.510 [2024-04-26 23:54:18.769529] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.510 [2024-04-26 23:54:18.769540] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.510 [2024-04-26 23:54:18.769546] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.510 [2024-04-26 23:54:18.769552] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.510 [2024-04-26 23:54:18.769665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.510 [2024-04-26 23:54:18.769819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.510 [2024-04-26 23:54:18.769820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.510 23:54:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:49.510 23:54:19 -- common/autotest_common.sh@850 -- # return 0 00:09:49.510 23:54:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:49.510 23:54:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:49.510 23:54:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.510 23:54:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.510 23:54:19 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:49.510 23:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.510 23:54:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.510 [2024-04-26 23:54:19.461961] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.510 23:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.510 23:54:19 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:49.510 23:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.510 23:54:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.510 Malloc0 00:09:49.510 23:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.510 23:54:19 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:49.510 23:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.510 23:54:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.510 Delay0 00:09:49.510 23:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.510 23:54:19 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:49.510 23:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.510 23:54:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.510 23:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.510 23:54:19 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:49.510 23:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:09:49.510 23:54:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.510 23:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.510 23:54:19 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:49.510 23:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.510 23:54:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.510 [2024-04-26 23:54:19.538303] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.510 23:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.510 23:54:19 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:49.510 23:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.510 23:54:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.510 23:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.510 23:54:19 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:49.510 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.510 [2024-04-26 23:54:19.659528] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:52.057 [2024-04-26 23:54:21.816043] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2538ba0 is same with the state(5) to be set 00:09:52.057 Initializing NVMe Controllers 00:09:52.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:52.057 controller IO queue size 128 less than required 00:09:52.057 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:52.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:52.057 Initialization complete. Launching workers. 
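(The abort workload above is driven entirely over JSON-RPC; rpc_cmd in the log is the autotest wrapper around scripts/rpc.py. Outside the harness, the same target configuration can be reproduced roughly like this, using the paths and addresses from this run; the delay bdev keeps I/O in flight long enough for the abort example to have something to abort.)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128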
00:09:52.057 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 31389 00:09:52.057 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31452, failed to submit 62 00:09:52.057 success 31393, unsuccess 59, failed 0 00:09:52.057 23:54:21 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:52.057 23:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:52.057 23:54:21 -- common/autotest_common.sh@10 -- # set +x 00:09:52.057 23:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:52.057 23:54:21 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:52.057 23:54:21 -- target/abort.sh@38 -- # nvmftestfini 00:09:52.057 23:54:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:52.057 23:54:21 -- nvmf/common.sh@117 -- # sync 00:09:52.057 23:54:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.057 23:54:21 -- nvmf/common.sh@120 -- # set +e 00:09:52.057 23:54:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.057 23:54:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.057 rmmod nvme_tcp 00:09:52.057 rmmod nvme_fabrics 00:09:52.057 rmmod nvme_keyring 00:09:52.057 23:54:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.057 23:54:21 -- nvmf/common.sh@124 -- # set -e 00:09:52.057 23:54:21 -- nvmf/common.sh@125 -- # return 0 00:09:52.057 23:54:21 -- nvmf/common.sh@478 -- # '[' -n 266064 ']' 00:09:52.057 23:54:21 -- nvmf/common.sh@479 -- # killprocess 266064 00:09:52.057 23:54:21 -- common/autotest_common.sh@936 -- # '[' -z 266064 ']' 00:09:52.057 23:54:21 -- common/autotest_common.sh@940 -- # kill -0 266064 00:09:52.057 23:54:21 -- common/autotest_common.sh@941 -- # uname 00:09:52.057 23:54:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:52.057 23:54:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 266064 00:09:52.057 23:54:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:52.058 23:54:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:52.058 23:54:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 266064' 00:09:52.058 killing process with pid 266064 00:09:52.058 23:54:21 -- common/autotest_common.sh@955 -- # kill 266064 00:09:52.058 23:54:21 -- common/autotest_common.sh@960 -- # wait 266064 00:09:52.058 23:54:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:52.058 23:54:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:52.058 23:54:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:52.058 23:54:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.058 23:54:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.058 23:54:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.058 23:54:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.058 23:54:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.971 23:54:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:53.971 00:09:53.971 real 0m12.984s 00:09:53.971 user 0m13.835s 00:09:53.971 sys 0m6.282s 00:09:54.231 23:54:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:54.231 23:54:24 -- common/autotest_common.sh@10 -- # set +x 00:09:54.231 ************************************ 00:09:54.231 END TEST nvmf_abort 00:09:54.231 ************************************ 00:09:54.231 23:54:24 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:54.231 23:54:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:54.231 23:54:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.231 23:54:24 -- common/autotest_common.sh@10 -- # set +x 00:09:54.231 ************************************ 00:09:54.231 START TEST nvmf_ns_hotplug_stress 00:09:54.231 ************************************ 00:09:54.231 23:54:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:54.491 * Looking for test storage... 00:09:54.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.491 23:54:24 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.491 23:54:24 -- nvmf/common.sh@7 -- # uname -s 00:09:54.491 23:54:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.491 23:54:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.491 23:54:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.491 23:54:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.491 23:54:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.491 23:54:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.491 23:54:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.491 23:54:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.491 23:54:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.491 23:54:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.491 23:54:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:54.491 23:54:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:54.491 23:54:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.491 23:54:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.491 23:54:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.491 23:54:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.491 23:54:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.491 23:54:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.491 23:54:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.491 23:54:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.491 23:54:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.491 23:54:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.491 23:54:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.491 23:54:24 -- paths/export.sh@5 -- # export PATH 00:09:54.491 23:54:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.491 23:54:24 -- nvmf/common.sh@47 -- # : 0 00:09:54.491 23:54:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:54.491 23:54:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:54.491 23:54:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.491 23:54:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.491 23:54:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.491 23:54:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:54.491 23:54:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:54.491 23:54:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:54.491 23:54:24 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:54.491 23:54:24 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:09:54.491 23:54:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:54.491 23:54:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.491 23:54:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:54.491 23:54:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:54.491 23:54:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:54.491 23:54:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.491 23:54:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.491 23:54:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.491 23:54:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:54.491 23:54:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:54.491 23:54:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:54.491 23:54:24 -- common/autotest_common.sh@10 -- # set +x 00:10:02.723 23:54:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:10:02.723 23:54:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.723 23:54:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.723 23:54:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.723 23:54:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.723 23:54:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.723 23:54:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.723 23:54:31 -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.723 23:54:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.723 23:54:31 -- nvmf/common.sh@296 -- # e810=() 00:10:02.723 23:54:31 -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.723 23:54:31 -- nvmf/common.sh@297 -- # x722=() 00:10:02.723 23:54:31 -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.723 23:54:31 -- nvmf/common.sh@298 -- # mlx=() 00:10:02.723 23:54:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.723 23:54:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.723 23:54:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:02.723 23:54:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.723 23:54:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.723 23:54:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.723 23:54:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:02.723 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:02.723 23:54:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.723 23:54:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:02.723 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:02.723 23:54:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:10:02.723 23:54:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.723 23:54:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.723 23:54:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:02.723 23:54:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.723 23:54:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:02.723 Found net devices under 0000:31:00.0: cvl_0_0 00:10:02.723 23:54:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.723 23:54:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.723 23:54:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.723 23:54:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:02.723 23:54:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.723 23:54:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:02.723 Found net devices under 0000:31:00.1: cvl_0_1 00:10:02.723 23:54:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.723 23:54:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:02.723 23:54:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:02.723 23:54:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:02.723 23:54:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.723 23:54:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.723 23:54:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.723 23:54:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.723 23:54:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.723 23:54:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.723 23:54:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.723 23:54:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.723 23:54:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.723 23:54:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.723 23:54:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.723 23:54:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.723 23:54:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.723 23:54:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.723 23:54:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.723 23:54:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.723 23:54:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.723 23:54:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.723 23:54:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.723 23:54:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:02.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.728 ms 00:10:02.723 00:10:02.723 --- 10.0.0.2 ping statistics --- 00:10:02.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.723 rtt min/avg/max/mdev = 0.728/0.728/0.728/0.000 ms 00:10:02.723 23:54:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:10:02.723 00:10:02.723 --- 10.0.0.1 ping statistics --- 00:10:02.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.723 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:10:02.723 23:54:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.723 23:54:31 -- nvmf/common.sh@411 -- # return 0 00:10:02.723 23:54:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:02.723 23:54:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.723 23:54:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:02.723 23:54:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.723 23:54:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:02.723 23:54:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:02.723 23:54:31 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:10:02.723 23:54:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:02.723 23:54:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:02.723 23:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:02.723 23:54:31 -- nvmf/common.sh@470 -- # nvmfpid=271155 00:10:02.723 23:54:31 -- nvmf/common.sh@471 -- # waitforlisten 271155 00:10:02.723 23:54:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:02.723 23:54:31 -- common/autotest_common.sh@817 -- # '[' -z 271155 ']' 00:10:02.723 23:54:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.723 23:54:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:02.723 23:54:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.723 23:54:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:02.724 23:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:02.724 [2024-04-26 23:54:31.861549] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:10:02.724 [2024-04-26 23:54:31.861595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.724 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.724 [2024-04-26 23:54:31.929125] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.724 [2024-04-26 23:54:31.992412] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.724 [2024-04-26 23:54:31.992451] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
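Condensed, the nvmf_tcp_init and nvmfappstart steps traced above place the target-side port in a private network namespace, address both ends of the link, open TCP port 4420, confirm reachability in both directions with ping, load nvme-tcp, and then start nvmf_tgt inside that namespace. The same sequence as a sketch; interface names, addresses, and the core/trace masks are the ones from this run, while the nvmf_tgt path is shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root namespace to target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE &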
00:10:02.724 [2024-04-26 23:54:31.992458] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.724 [2024-04-26 23:54:31.992464] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.724 [2024-04-26 23:54:31.992470] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.724 [2024-04-26 23:54:31.992574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.724 [2024-04-26 23:54:31.992727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.724 [2024-04-26 23:54:31.992728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.724 23:54:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:02.724 23:54:32 -- common/autotest_common.sh@850 -- # return 0 00:10:02.724 23:54:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:02.724 23:54:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:02.724 23:54:32 -- common/autotest_common.sh@10 -- # set +x 00:10:02.724 23:54:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.724 23:54:32 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:10:02.724 23:54:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:02.724 [2024-04-26 23:54:32.808535] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.724 23:54:32 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:02.985 23:54:33 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.985 [2024-04-26 23:54:33.146017] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.985 23:54:33 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:03.247 23:54:33 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:03.507 Malloc0 00:10:03.507 23:54:33 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:03.507 Delay0 00:10:03.507 23:54:33 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.768 23:54:33 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:03.768 NULL1 00:10:04.029 23:54:34 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:04.029 23:54:34 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:04.029 23:54:34 -- target/ns_hotplug_stress.sh@33 -- # 
PERF_PID=271525 00:10:04.029 23:54:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:04.029 23:54:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.029 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.290 23:54:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.551 23:54:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:10:04.551 23:54:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:04.551 [2024-04-26 23:54:34.661205] bdev.c:4975:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:04.551 true 00:10:04.551 23:54:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:04.551 23:54:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.813 23:54:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.074 23:54:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:10:05.074 23:54:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:05.074 true 00:10:05.074 23:54:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:05.074 23:54:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.336 23:54:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.336 23:54:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:10:05.336 23:54:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:05.597 true 00:10:05.597 23:54:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:05.597 23:54:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.858 23:54:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.858 23:54:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:10:05.858 23:54:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:06.120 true 00:10:06.120 23:54:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:06.120 23:54:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.380 23:54:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.380 23:54:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:10:06.380 23:54:36 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:06.642 true 00:10:06.642 23:54:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:06.642 23:54:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.902 23:54:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.903 23:54:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:10:06.903 23:54:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:07.163 true 00:10:07.163 23:54:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:07.163 23:54:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.425 23:54:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.425 23:54:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:07.425 23:54:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:07.686 true 00:10:07.686 23:54:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:07.686 23:54:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.947 23:54:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.947 23:54:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:07.947 23:54:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:08.208 true 00:10:08.208 23:54:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:08.208 23:54:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.469 23:54:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.469 23:54:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:10:08.469 23:54:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:08.730 true 00:10:08.730 23:54:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:08.730 23:54:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.991 23:54:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.991 23:54:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:08.991 23:54:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:09.252 true 00:10:09.252 
23:54:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:09.252 23:54:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.513 23:54:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.513 23:54:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:09.513 23:54:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:09.775 true 00:10:09.775 23:54:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:09.775 23:54:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.036 23:54:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.036 23:54:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:10.036 23:54:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:10.297 true 00:10:10.297 23:54:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:10.297 23:54:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.557 23:54:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.557 23:54:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:10.557 23:54:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:10.817 true 00:10:10.817 23:54:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:10.817 23:54:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.077 23:54:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.077 23:54:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:11.077 23:54:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:11.338 true 00:10:11.338 23:54:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:11.338 23:54:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.597 23:54:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.597 23:54:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:11.597 23:54:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:11.857 true 00:10:11.857 23:54:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:11.857 23:54:41 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.117 23:54:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.117 23:54:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:12.117 23:54:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:12.376 true 00:10:12.376 23:54:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:12.376 23:54:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.376 23:54:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.637 23:54:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:12.637 23:54:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:12.905 true 00:10:12.905 23:54:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:12.905 23:54:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.905 23:54:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.172 23:54:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:10:13.172 23:54:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:13.433 true 00:10:13.433 23:54:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:13.433 23:54:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.433 23:54:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.694 23:54:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:13.694 23:54:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:13.956 true 00:10:13.956 23:54:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:13.956 23:54:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.956 23:54:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.217 23:54:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:10:14.217 23:54:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:14.480 true 00:10:14.480 23:54:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:14.480 23:54:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
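The cycle repeating from here to the end of the test is the hot-plug stress itself: while the spdk_nvme_perf reader launched earlier is still alive, the script keeps hot-removing and re-adding the Delay0 namespace and growing the NULL1 bdev by one block per pass. Put back together as a loop, with the earlier one-time RPC setup included, this is a condensed paraphrase of the trace rather than the script's verbatim source (paths shortened; rpc stands for SPDK's scripts/rpc.py):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2> /dev/null; do
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove nsid 1 (Delay0)
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it again
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                       # grow the NULL1 namespace
  done
  wait "$PERF_PID"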
00:10:14.480 23:54:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.741 23:54:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:10:14.741 23:54:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:15.002 true 00:10:15.002 23:54:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:15.002 23:54:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.002 23:54:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.263 23:54:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:10:15.263 23:54:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:15.524 true 00:10:15.524 23:54:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:15.524 23:54:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.524 23:54:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.786 23:54:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:10:15.786 23:54:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:15.786 true 00:10:16.047 23:54:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:16.047 23:54:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.047 23:54:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.308 23:54:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:10:16.308 23:54:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:16.308 true 00:10:16.570 23:54:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:16.570 23:54:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.570 23:54:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.832 23:54:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:10:16.832 23:54:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:17.092 true 00:10:17.092 23:54:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:17.092 23:54:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.092 23:54:47 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.353 23:54:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:10:17.353 23:54:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:17.614 true 00:10:17.614 23:54:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:17.614 23:54:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.614 23:54:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.874 23:54:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:10:17.874 23:54:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:18.134 true 00:10:18.134 23:54:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:18.134 23:54:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.134 23:54:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.395 23:54:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:10:18.395 23:54:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:18.395 true 00:10:18.654 23:54:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:18.654 23:54:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.654 23:54:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.914 23:54:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:10:18.914 23:54:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:18.914 true 00:10:19.173 23:54:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:19.173 23:54:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.173 23:54:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.434 23:54:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:10:19.434 23:54:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:19.434 true 00:10:19.694 23:54:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:19.694 23:54:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.694 23:54:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:10:19.953 23:54:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:10:19.953 23:54:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:19.953 true 00:10:20.213 23:54:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:20.213 23:54:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.213 23:54:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.473 23:54:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:10:20.473 23:54:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:20.473 true 00:10:20.733 23:54:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:20.733 23:54:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.733 23:54:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.992 23:54:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:10:20.992 23:54:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:21.253 true 00:10:21.253 23:54:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:21.253 23:54:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.253 23:54:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.513 23:54:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:10:21.513 23:54:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:21.773 true 00:10:21.773 23:54:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:21.773 23:54:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.773 23:54:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.033 23:54:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:10:22.033 23:54:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:22.033 true 00:10:22.293 23:54:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:22.293 23:54:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.293 23:54:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.553 23:54:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:10:22.553 23:54:52 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:22.553 true 00:10:22.813 23:54:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:22.813 23:54:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.813 23:54:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.074 23:54:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:10:23.074 23:54:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:23.334 true 00:10:23.334 23:54:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:23.334 23:54:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.334 23:54:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.594 23:54:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:10:23.594 23:54:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:23.854 true 00:10:23.854 23:54:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:23.854 23:54:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.854 23:54:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.161 23:54:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:10:24.161 23:54:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:24.161 true 00:10:24.161 23:54:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:24.161 23:54:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.458 23:54:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.719 23:54:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:10:24.719 23:54:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:24.719 true 00:10:24.719 23:54:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:24.719 23:54:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.979 23:54:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.240 23:54:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:10:25.240 23:54:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1041 00:10:25.240 true 00:10:25.240 23:54:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:25.240 23:54:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.503 23:54:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.763 23:54:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:10:25.763 23:54:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:25.763 true 00:10:25.763 23:54:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:25.763 23:54:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.024 23:54:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.284 23:54:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:10:26.284 23:54:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:26.284 true 00:10:26.284 23:54:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:26.284 23:54:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.543 23:54:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.804 23:54:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:10:26.804 23:54:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:26.804 true 00:10:26.804 23:54:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:26.804 23:54:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.064 23:54:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.325 23:54:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:10:27.325 23:54:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:27.325 true 00:10:27.325 23:54:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:27.325 23:54:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.585 23:54:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.845 23:54:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:10:27.846 23:54:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:27.846 true 00:10:27.846 23:54:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:27.846 23:54:57 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.106 23:54:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.367 23:54:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:10:28.367 23:54:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:28.367 true 00:10:28.367 23:54:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:28.367 23:54:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.629 23:54:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.890 23:54:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:10:28.890 23:54:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:28.890 true 00:10:28.890 23:54:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:28.890 23:54:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.151 23:54:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.412 23:54:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:10:29.412 23:54:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:29.412 true 00:10:29.412 23:54:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:29.412 23:54:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.673 23:54:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.933 23:54:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:10:29.933 23:54:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:29.933 true 00:10:29.933 23:55:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:29.933 23:55:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.195 23:55:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.455 23:55:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:10:30.455 23:55:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:30.455 true 00:10:30.455 23:55:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:30.455 23:55:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.715 23:55:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.716 23:55:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:10:30.716 23:55:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:30.977 true 00:10:30.977 23:55:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:30.977 23:55:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.238 23:55:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.238 23:55:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:10:31.238 23:55:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:31.499 true 00:10:31.499 23:55:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:31.499 23:55:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.760 23:55:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.760 23:55:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:10:31.760 23:55:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:32.021 true 00:10:32.021 23:55:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:32.021 23:55:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.282 23:55:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.282 23:55:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055 00:10:32.282 23:55:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:32.543 true 00:10:32.543 23:55:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:32.543 23:55:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.543 23:55:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.804 23:55:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1056 00:10:32.805 23:55:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:33.066 true 00:10:33.066 23:55:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:33.066 23:55:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.066 23:55:03 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.328 23:55:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1057 00:10:33.328 23:55:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:10:33.589 true 00:10:33.589 23:55:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:33.589 23:55:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.589 23:55:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.851 23:55:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1058 00:10:33.851 23:55:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:10:34.112 true 00:10:34.112 23:55:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:34.112 23:55:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.112 23:55:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.372 Initializing NVMe Controllers 00:10:34.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:34.372 Controller IO queue size 128, less than required. 00:10:34.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:34.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:34.372 Initialization complete. Launching workers. 
00:10:34.372 ======================================================== 00:10:34.372 Latency(us) 00:10:34.372 Device Information : IOPS MiB/s Average min max 00:10:34.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 21903.83 10.70 5843.57 1878.39 9496.36 00:10:34.372 ======================================================== 00:10:34.372 Total : 21903.83 10.70 5843.57 1878.39 9496.36 00:10:34.372 00:10:34.372 23:55:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1059 00:10:34.372 23:55:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:10:34.632 true 00:10:34.632 23:55:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 271525 00:10:34.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (271525) - No such process 00:10:34.632 23:55:04 -- target/ns_hotplug_stress.sh@44 -- # wait 271525 00:10:34.632 23:55:04 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:34.632 23:55:04 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:10:34.632 23:55:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:34.632 23:55:04 -- nvmf/common.sh@117 -- # sync 00:10:34.632 23:55:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:34.632 23:55:04 -- nvmf/common.sh@120 -- # set +e 00:10:34.632 23:55:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.632 23:55:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.632 rmmod nvme_tcp 00:10:34.632 rmmod nvme_fabrics 00:10:34.632 rmmod nvme_keyring 00:10:34.632 23:55:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.632 23:55:04 -- nvmf/common.sh@124 -- # set -e 00:10:34.632 23:55:04 -- nvmf/common.sh@125 -- # return 0 00:10:34.632 23:55:04 -- nvmf/common.sh@478 -- # '[' -n 271155 ']' 00:10:34.632 23:55:04 -- nvmf/common.sh@479 -- # killprocess 271155 00:10:34.632 23:55:04 -- common/autotest_common.sh@936 -- # '[' -z 271155 ']' 00:10:34.632 23:55:04 -- common/autotest_common.sh@940 -- # kill -0 271155 00:10:34.632 23:55:04 -- common/autotest_common.sh@941 -- # uname 00:10:34.632 23:55:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:34.632 23:55:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 271155 00:10:34.632 23:55:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:34.632 23:55:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:34.632 23:55:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 271155' 00:10:34.632 killing process with pid 271155 00:10:34.632 23:55:04 -- common/autotest_common.sh@955 -- # kill 271155 00:10:34.632 23:55:04 -- common/autotest_common.sh@960 -- # wait 271155 00:10:34.893 23:55:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:34.893 23:55:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:34.893 23:55:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:34.893 23:55:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.893 23:55:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.893 23:55:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.893 23:55:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:34.893 23:55:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.803 23:55:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:36.803 00:10:36.803 real 0m42.590s 00:10:36.803 user 2m34.765s 00:10:36.803 sys 
0m12.477s 00:10:36.803 23:55:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:36.803 23:55:06 -- common/autotest_common.sh@10 -- # set +x 00:10:36.803 ************************************ 00:10:36.803 END TEST nvmf_ns_hotplug_stress 00:10:36.803 ************************************ 00:10:36.803 23:55:07 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:36.803 23:55:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:36.803 23:55:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:36.803 23:55:07 -- common/autotest_common.sh@10 -- # set +x 00:10:37.064 ************************************ 00:10:37.064 START TEST nvmf_connect_stress 00:10:37.064 ************************************ 00:10:37.064 23:55:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:37.064 * Looking for test storage... 00:10:37.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.064 23:55:07 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.064 23:55:07 -- nvmf/common.sh@7 -- # uname -s 00:10:37.064 23:55:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.064 23:55:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.064 23:55:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.064 23:55:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.064 23:55:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.064 23:55:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.064 23:55:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.064 23:55:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.064 23:55:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.064 23:55:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.064 23:55:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:37.064 23:55:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:37.064 23:55:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.064 23:55:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.064 23:55:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.064 23:55:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.064 23:55:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.064 23:55:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.064 23:55:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.064 23:55:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.064 23:55:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
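At the top of the connect_stress run, nvmf/common.sh rebuilds the initiator identity: nvme gen-hostnqn yields the uuid-based host NQN shown above, and NVME_HOST packages it as --hostnqn/--hostid options. For illustration only, and not necessarily how this particular test connects, that identity is what an initiator would pass when connecting by hand; the target address and subsystem NQN below are the ones used earlier in this log:

  HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  HOSTID=${HOSTNQN##*uuid:}              # the uuid portion doubles as the host id
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
       -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn="$HOSTNQN" --hostid="$HOSTID"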
00:10:37.064 23:55:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.064 23:55:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.064 23:55:07 -- paths/export.sh@5 -- # export PATH 00:10:37.064 23:55:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.064 23:55:07 -- nvmf/common.sh@47 -- # : 0 00:10:37.064 23:55:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.064 23:55:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.064 23:55:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.064 23:55:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.064 23:55:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.064 23:55:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.064 23:55:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.064 23:55:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.064 23:55:07 -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:37.064 23:55:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:37.064 23:55:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.064 23:55:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:37.064 23:55:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:37.064 23:55:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:37.065 23:55:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.065 23:55:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.065 23:55:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.325 23:55:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:37.325 23:55:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:37.325 23:55:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:37.325 23:55:07 -- common/autotest_common.sh@10 -- # set +x 00:10:43.911 23:55:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:43.911 23:55:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:43.911 23:55:13 -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:10:43.911 23:55:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:43.911 23:55:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:43.911 23:55:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:43.911 23:55:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:43.911 23:55:13 -- nvmf/common.sh@295 -- # net_devs=() 00:10:43.911 23:55:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:43.911 23:55:13 -- nvmf/common.sh@296 -- # e810=() 00:10:43.911 23:55:13 -- nvmf/common.sh@296 -- # local -ga e810 00:10:43.911 23:55:13 -- nvmf/common.sh@297 -- # x722=() 00:10:43.911 23:55:13 -- nvmf/common.sh@297 -- # local -ga x722 00:10:43.911 23:55:13 -- nvmf/common.sh@298 -- # mlx=() 00:10:43.911 23:55:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:43.911 23:55:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.911 23:55:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:43.911 23:55:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:43.911 23:55:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:43.911 23:55:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.911 23:55:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:43.911 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:43.911 23:55:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.911 23:55:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:43.911 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:43.911 23:55:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:43.911 23:55:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:43.911 23:55:13 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.911 23:55:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.911 23:55:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:43.911 23:55:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.911 23:55:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:43.911 Found net devices under 0000:31:00.0: cvl_0_0 00:10:43.911 23:55:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.911 23:55:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.911 23:55:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.911 23:55:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:43.911 23:55:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.911 23:55:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:43.911 Found net devices under 0000:31:00.1: cvl_0_1 00:10:43.911 23:55:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.911 23:55:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:43.911 23:55:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:43.911 23:55:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:43.911 23:55:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.911 23:55:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.911 23:55:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.911 23:55:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:43.911 23:55:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.911 23:55:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.911 23:55:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:43.911 23:55:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.911 23:55:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.911 23:55:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:43.911 23:55:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:43.911 23:55:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.911 23:55:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.911 23:55:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.911 23:55:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.911 23:55:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:43.911 23:55:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.911 23:55:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.911 23:55:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.911 23:55:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:43.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:10:43.911 00:10:43.911 --- 10.0.0.2 ping statistics --- 00:10:43.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.911 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:10:43.911 23:55:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:10:43.911 00:10:43.911 --- 10.0.0.1 ping statistics --- 00:10:43.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.911 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:10:43.911 23:55:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.911 23:55:13 -- nvmf/common.sh@411 -- # return 0 00:10:43.911 23:55:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:43.911 23:55:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.911 23:55:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:43.911 23:55:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.911 23:55:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:43.911 23:55:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:43.911 23:55:14 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:43.911 23:55:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:43.911 23:55:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:43.911 23:55:14 -- common/autotest_common.sh@10 -- # set +x 00:10:43.911 23:55:14 -- nvmf/common.sh@470 -- # nvmfpid=282050 00:10:43.911 23:55:14 -- nvmf/common.sh@471 -- # waitforlisten 282050 00:10:43.911 23:55:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:43.911 23:55:14 -- common/autotest_common.sh@817 -- # '[' -z 282050 ']' 00:10:43.911 23:55:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.911 23:55:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:43.911 23:55:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.911 23:55:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:43.911 23:55:14 -- common/autotest_common.sh@10 -- # set +x 00:10:43.911 [2024-04-26 23:55:14.084275] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:10:43.912 [2024-04-26 23:55:14.084338] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.912 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.172 [2024-04-26 23:55:14.158135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:44.172 [2024-04-26 23:55:14.231397] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.172 [2024-04-26 23:55:14.231440] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:44.172 [2024-04-26 23:55:14.231448] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.172 [2024-04-26 23:55:14.231454] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.172 [2024-04-26 23:55:14.231459] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.172 [2024-04-26 23:55:14.231579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.172 [2024-04-26 23:55:14.231735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.172 [2024-04-26 23:55:14.231736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.743 23:55:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:44.743 23:55:14 -- common/autotest_common.sh@850 -- # return 0 00:10:44.743 23:55:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:44.743 23:55:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:44.743 23:55:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.743 23:55:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.743 23:55:14 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.743 23:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.743 23:55:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.743 [2024-04-26 23:55:14.911578] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.743 23:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.743 23:55:14 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:44.743 23:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.743 23:55:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.743 23:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.743 23:55:14 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.743 23:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.743 23:55:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.743 [2024-04-26 23:55:14.943983] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.744 23:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.744 23:55:14 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:44.744 23:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.744 23:55:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.744 NULL1 00:10:44.744 23:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.744 23:55:14 -- target/connect_stress.sh@21 -- # PERF_PID=282142 00:10:44.744 23:55:14 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:45.005 23:55:14 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:45.005 23:55:14 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:45.005 23:55:14 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:45.005 23:55:14 -- target/connect_stress.sh@27 -- # for 
i in $(seq 1 20) 00:10:45.005 23:55:14 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:14 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:14 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:14 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:14 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.005 23:55:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:14 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:14 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.005 23:55:15 -- target/connect_stress.sh@28 -- # cat 00:10:45.005 23:55:15 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:45.005 23:55:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.005 23:55:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.005 23:55:15 -- common/autotest_common.sh@10 -- # set +x 00:10:45.266 23:55:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.266 23:55:15 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:45.266 23:55:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.266 23:55:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.266 23:55:15 -- common/autotest_common.sh@10 -- # set +x 00:10:45.527 23:55:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.527 23:55:15 -- 
target/connect_stress.sh@34 -- # kill -0 282142 00:10:45.527 23:55:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.527 23:55:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.527 23:55:15 -- common/autotest_common.sh@10 -- # set +x 00:10:46.099 23:55:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.099 23:55:16 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:46.099 23:55:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.099 23:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.099 23:55:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.362 23:55:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.362 23:55:16 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:46.362 23:55:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.362 23:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.362 23:55:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.623 23:55:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.623 23:55:16 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:46.623 23:55:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.623 23:55:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.623 23:55:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.884 23:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.884 23:55:17 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:46.884 23:55:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.884 23:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.884 23:55:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.144 23:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.145 23:55:17 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:47.145 23:55:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.145 23:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.145 23:55:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.717 23:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.717 23:55:17 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:47.717 23:55:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.717 23:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.717 23:55:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.978 23:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.978 23:55:17 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:47.978 23:55:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.978 23:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.978 23:55:17 -- common/autotest_common.sh@10 -- # set +x 00:10:48.238 23:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.238 23:55:18 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:48.238 23:55:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.238 23:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.238 23:55:18 -- common/autotest_common.sh@10 -- # set +x 00:10:48.499 23:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.499 23:55:18 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:48.499 23:55:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.499 23:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.499 23:55:18 -- common/autotest_common.sh@10 -- # set +x 00:10:48.760 23:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.760 23:55:18 -- 
target/connect_stress.sh@34 -- # kill -0 282142 00:10:48.760 23:55:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.760 23:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.760 23:55:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.332 23:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.332 23:55:19 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:49.332 23:55:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.332 23:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.332 23:55:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.594 23:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.594 23:55:19 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:49.594 23:55:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.594 23:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.594 23:55:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.855 23:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.855 23:55:19 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:49.855 23:55:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.855 23:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.855 23:55:19 -- common/autotest_common.sh@10 -- # set +x 00:10:50.116 23:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.116 23:55:20 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:50.116 23:55:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.116 23:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.116 23:55:20 -- common/autotest_common.sh@10 -- # set +x 00:10:50.377 23:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.377 23:55:20 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:50.377 23:55:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.377 23:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.377 23:55:20 -- common/autotest_common.sh@10 -- # set +x 00:10:50.949 23:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.949 23:55:20 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:50.949 23:55:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.949 23:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.949 23:55:20 -- common/autotest_common.sh@10 -- # set +x 00:10:51.210 23:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.210 23:55:21 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:51.210 23:55:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.210 23:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.210 23:55:21 -- common/autotest_common.sh@10 -- # set +x 00:10:51.470 23:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.470 23:55:21 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:51.470 23:55:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.470 23:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.470 23:55:21 -- common/autotest_common.sh@10 -- # set +x 00:10:51.731 23:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.731 23:55:21 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:51.731 23:55:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.731 23:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.731 23:55:21 -- common/autotest_common.sh@10 -- # set +x 00:10:52.019 23:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.019 23:55:22 -- 
target/connect_stress.sh@34 -- # kill -0 282142 00:10:52.019 23:55:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.019 23:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.019 23:55:22 -- common/autotest_common.sh@10 -- # set +x 00:10:52.604 23:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.604 23:55:22 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:52.604 23:55:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.604 23:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.604 23:55:22 -- common/autotest_common.sh@10 -- # set +x 00:10:52.872 23:55:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.872 23:55:22 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:52.872 23:55:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.872 23:55:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.872 23:55:22 -- common/autotest_common.sh@10 -- # set +x 00:10:53.139 23:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.139 23:55:23 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:53.139 23:55:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.139 23:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.139 23:55:23 -- common/autotest_common.sh@10 -- # set +x 00:10:53.399 23:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.399 23:55:23 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:53.399 23:55:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.399 23:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.399 23:55:23 -- common/autotest_common.sh@10 -- # set +x 00:10:53.661 23:55:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.661 23:55:23 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:53.661 23:55:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.661 23:55:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.661 23:55:23 -- common/autotest_common.sh@10 -- # set +x 00:10:54.231 23:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.231 23:55:24 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:54.231 23:55:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.231 23:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.231 23:55:24 -- common/autotest_common.sh@10 -- # set +x 00:10:54.491 23:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.491 23:55:24 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:54.491 23:55:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.491 23:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.491 23:55:24 -- common/autotest_common.sh@10 -- # set +x 00:10:54.751 23:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.751 23:55:24 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:54.751 23:55:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.751 23:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.751 23:55:24 -- common/autotest_common.sh@10 -- # set +x 00:10:55.012 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:55.012 23:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.012 23:55:25 -- target/connect_stress.sh@34 -- # kill -0 282142 00:10:55.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (282142) - No such process 00:10:55.012 23:55:25 -- target/connect_stress.sh@38 -- # wait 282142 00:10:55.012 23:55:25 
-- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:55.012 23:55:25 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:55.012 23:55:25 -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:55.012 23:55:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:55.012 23:55:25 -- nvmf/common.sh@117 -- # sync 00:10:55.012 23:55:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.012 23:55:25 -- nvmf/common.sh@120 -- # set +e 00:10:55.012 23:55:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.012 23:55:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.012 rmmod nvme_tcp 00:10:55.012 rmmod nvme_fabrics 00:10:55.012 rmmod nvme_keyring 00:10:55.012 23:55:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.274 23:55:25 -- nvmf/common.sh@124 -- # set -e 00:10:55.274 23:55:25 -- nvmf/common.sh@125 -- # return 0 00:10:55.274 23:55:25 -- nvmf/common.sh@478 -- # '[' -n 282050 ']' 00:10:55.274 23:55:25 -- nvmf/common.sh@479 -- # killprocess 282050 00:10:55.274 23:55:25 -- common/autotest_common.sh@936 -- # '[' -z 282050 ']' 00:10:55.274 23:55:25 -- common/autotest_common.sh@940 -- # kill -0 282050 00:10:55.274 23:55:25 -- common/autotest_common.sh@941 -- # uname 00:10:55.274 23:55:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:55.274 23:55:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 282050 00:10:55.274 23:55:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:55.274 23:55:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:55.274 23:55:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 282050' 00:10:55.274 killing process with pid 282050 00:10:55.274 23:55:25 -- common/autotest_common.sh@955 -- # kill 282050 00:10:55.274 23:55:25 -- common/autotest_common.sh@960 -- # wait 282050 00:10:55.274 23:55:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:55.274 23:55:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:55.274 23:55:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:55.274 23:55:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.274 23:55:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.274 23:55:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.274 23:55:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.274 23:55:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.822 23:55:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:57.822 00:10:57.822 real 0m20.318s 00:10:57.822 user 0m41.942s 00:10:57.822 sys 0m8.353s 00:10:57.822 23:55:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:57.822 23:55:27 -- common/autotest_common.sh@10 -- # set +x 00:10:57.822 ************************************ 00:10:57.822 END TEST nvmf_connect_stress 00:10:57.822 ************************************ 00:10:57.822 23:55:27 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:57.822 23:55:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:57.822 23:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.822 23:55:27 -- common/autotest_common.sh@10 -- # set +x 00:10:57.822 ************************************ 00:10:57.822 START TEST nvmf_fused_ordering 00:10:57.822 ************************************ 00:10:57.822 23:55:27 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:57.822 * Looking for test storage... 00:10:57.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.823 23:55:27 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.823 23:55:27 -- nvmf/common.sh@7 -- # uname -s 00:10:57.823 23:55:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.823 23:55:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.823 23:55:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.823 23:55:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.823 23:55:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.823 23:55:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.823 23:55:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.823 23:55:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.823 23:55:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.823 23:55:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.823 23:55:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.823 23:55:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.823 23:55:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.823 23:55:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.823 23:55:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.823 23:55:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.823 23:55:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.823 23:55:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.823 23:55:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.823 23:55:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.823 23:55:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.823 23:55:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.823 23:55:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.823 23:55:27 -- paths/export.sh@5 -- # export PATH 00:10:57.823 23:55:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.823 23:55:27 -- nvmf/common.sh@47 -- # : 0 00:10:57.823 23:55:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.823 23:55:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.823 23:55:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.823 23:55:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.823 23:55:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.823 23:55:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.823 23:55:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.823 23:55:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.823 23:55:27 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:57.823 23:55:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:57.823 23:55:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.823 23:55:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:57.823 23:55:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:57.823 23:55:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:57.823 23:55:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.823 23:55:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.823 23:55:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.823 23:55:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:57.823 23:55:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:57.823 23:55:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.823 23:55:27 -- common/autotest_common.sh@10 -- # set +x 00:11:04.498 23:55:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:04.498 23:55:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.498 23:55:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.498 23:55:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.498 23:55:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.498 23:55:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.498 23:55:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.498 23:55:34 -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.498 23:55:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.498 23:55:34 -- nvmf/common.sh@296 -- # e810=() 00:11:04.498 23:55:34 -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.498 23:55:34 -- nvmf/common.sh@297 -- # x722=() 
00:11:04.498 23:55:34 -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.498 23:55:34 -- nvmf/common.sh@298 -- # mlx=() 00:11:04.498 23:55:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.498 23:55:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.498 23:55:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.498 23:55:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.498 23:55:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.498 23:55:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.498 23:55:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:04.498 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:04.498 23:55:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.498 23:55:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:04.498 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:04.498 23:55:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.498 23:55:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.498 23:55:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.498 23:55:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.498 23:55:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:04.498 23:55:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.498 23:55:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:04.498 Found net devices under 0000:31:00.0: cvl_0_0 00:11:04.498 23:55:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:04.498 23:55:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.499 23:55:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.499 23:55:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:04.499 23:55:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.499 23:55:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:04.499 Found net devices under 0000:31:00.1: cvl_0_1 00:11:04.499 23:55:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.499 23:55:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:04.499 23:55:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:04.499 23:55:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:04.499 23:55:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:04.499 23:55:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:04.499 23:55:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.499 23:55:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.499 23:55:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.499 23:55:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.499 23:55:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.499 23:55:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.499 23:55:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.499 23:55:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.499 23:55:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.499 23:55:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.499 23:55:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.499 23:55:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.499 23:55:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.499 23:55:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.499 23:55:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.499 23:55:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.499 23:55:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.499 23:55:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.760 23:55:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.760 23:55:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:11:04.760 00:11:04.760 --- 10.0.0.2 ping statistics --- 00:11:04.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.760 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:11:04.760 23:55:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:11:04.760 00:11:04.760 --- 10.0.0.1 ping statistics --- 00:11:04.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.760 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:11:04.760 23:55:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.760 23:55:34 -- nvmf/common.sh@411 -- # return 0 00:11:04.760 23:55:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:04.760 23:55:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.760 23:55:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:04.760 23:55:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:04.760 23:55:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.760 23:55:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:04.760 23:55:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:04.760 23:55:34 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:04.760 23:55:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:04.760 23:55:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:04.760 23:55:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.760 23:55:34 -- nvmf/common.sh@470 -- # nvmfpid=288545 00:11:04.760 23:55:34 -- nvmf/common.sh@471 -- # waitforlisten 288545 00:11:04.760 23:55:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:04.760 23:55:34 -- common/autotest_common.sh@817 -- # '[' -z 288545 ']' 00:11:04.760 23:55:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.760 23:55:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:04.760 23:55:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.760 23:55:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:04.760 23:55:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.760 [2024-04-26 23:55:34.862474] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:11:04.760 [2024-04-26 23:55:34.862526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.760 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.760 [2024-04-26 23:55:34.928643] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.020 [2024-04-26 23:55:34.991401] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.020 [2024-04-26 23:55:34.991439] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.020 [2024-04-26 23:55:34.991447] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.020 [2024-04-26 23:55:34.991453] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.020 [2024-04-26 23:55:34.991459] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
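[editor's note] The nvmf_tcp_init steps traced above move one E810 port (cvl_0_0) into the cvl_0_0_ns_spdk network namespace as the target-side interface at 10.0.0.2, leave its partner port cvl_0_1 in the default namespace as the initiator side at 10.0.0.1, open TCP port 4420, and ping in both directions before the target starts. A minimal standalone sketch of that rig, reconstructed from the commands in the trace and assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing (this block is illustrative, not part of the captured output):
# target-side port goes into its own namespace; initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator address in the root namespace, target address inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring both ports (and loopback inside the namespace) up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# accept NVMe/TCP traffic to port 4420 arriving on the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check connectivity in both directions before starting nvmf_tgt
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1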
00:11:05.020 [2024-04-26 23:55:34.991478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.592 23:55:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:05.592 23:55:35 -- common/autotest_common.sh@850 -- # return 0 00:11:05.592 23:55:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:05.592 23:55:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:05.592 23:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 23:55:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.592 23:55:35 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.592 23:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.592 23:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 [2024-04-26 23:55:35.665787] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.592 23:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.592 23:55:35 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:05.592 23:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.592 23:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 23:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.592 23:55:35 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.592 23:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.592 23:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 [2024-04-26 23:55:35.689957] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.592 23:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.592 23:55:35 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:05.592 23:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.592 23:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 NULL1 00:11:05.592 23:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.592 23:55:35 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:05.592 23:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.592 23:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 23:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.592 23:55:35 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:05.592 23:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.592 23:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 23:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.592 23:55:35 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:05.592 [2024-04-26 23:55:35.752564] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:11:05.592 [2024-04-26 23:55:35.752605] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288596 ] 00:11:05.592 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.163 Attached to nqn.2016-06.io.spdk:cnode1 00:11:06.163 Namespace ID: 1 size: 1GB 00:11:06.163 fused_ordering(0) 00:11:06.163 fused_ordering(1) 00:11:06.163 fused_ordering(2) 00:11:06.163 fused_ordering(3) 00:11:06.163 fused_ordering(4) 00:11:06.163 fused_ordering(5) 00:11:06.163 fused_ordering(6) 00:11:06.163 fused_ordering(7) 00:11:06.163 fused_ordering(8) 00:11:06.163 fused_ordering(9) 00:11:06.163 fused_ordering(10) 00:11:06.163 fused_ordering(11) 00:11:06.163 fused_ordering(12) 00:11:06.163 fused_ordering(13) 00:11:06.163 fused_ordering(14) 00:11:06.163 fused_ordering(15) 00:11:06.163 fused_ordering(16) 00:11:06.163 fused_ordering(17) 00:11:06.163 fused_ordering(18) 00:11:06.163 fused_ordering(19) 00:11:06.163 fused_ordering(20) 00:11:06.163 fused_ordering(21) 00:11:06.163 fused_ordering(22) 00:11:06.163 fused_ordering(23) 00:11:06.163 fused_ordering(24) 00:11:06.163 fused_ordering(25) 00:11:06.163 fused_ordering(26) 00:11:06.163 fused_ordering(27) 00:11:06.163 fused_ordering(28) 00:11:06.163 fused_ordering(29) 00:11:06.163 fused_ordering(30) 00:11:06.163 fused_ordering(31) 00:11:06.163 fused_ordering(32) 00:11:06.163 fused_ordering(33) 00:11:06.163 fused_ordering(34) 00:11:06.163 fused_ordering(35) 00:11:06.163 fused_ordering(36) 00:11:06.164 fused_ordering(37) 00:11:06.164 fused_ordering(38) 00:11:06.164 fused_ordering(39) 00:11:06.164 fused_ordering(40) 00:11:06.164 fused_ordering(41) 00:11:06.164 fused_ordering(42) 00:11:06.164 fused_ordering(43) 00:11:06.164 fused_ordering(44) 00:11:06.164 fused_ordering(45) 00:11:06.164 fused_ordering(46) 00:11:06.164 fused_ordering(47) 00:11:06.164 fused_ordering(48) 00:11:06.164 fused_ordering(49) 00:11:06.164 fused_ordering(50) 00:11:06.164 fused_ordering(51) 00:11:06.164 fused_ordering(52) 00:11:06.164 fused_ordering(53) 00:11:06.164 fused_ordering(54) 00:11:06.164 fused_ordering(55) 00:11:06.164 fused_ordering(56) 00:11:06.164 fused_ordering(57) 00:11:06.164 fused_ordering(58) 00:11:06.164 fused_ordering(59) 00:11:06.164 fused_ordering(60) 00:11:06.164 fused_ordering(61) 00:11:06.164 fused_ordering(62) 00:11:06.164 fused_ordering(63) 00:11:06.164 fused_ordering(64) 00:11:06.164 fused_ordering(65) 00:11:06.164 fused_ordering(66) 00:11:06.164 fused_ordering(67) 00:11:06.164 fused_ordering(68) 00:11:06.164 fused_ordering(69) 00:11:06.164 fused_ordering(70) 00:11:06.164 fused_ordering(71) 00:11:06.164 fused_ordering(72) 00:11:06.164 fused_ordering(73) 00:11:06.164 fused_ordering(74) 00:11:06.164 fused_ordering(75) 00:11:06.164 fused_ordering(76) 00:11:06.164 fused_ordering(77) 00:11:06.164 fused_ordering(78) 00:11:06.164 fused_ordering(79) 00:11:06.164 fused_ordering(80) 00:11:06.164 fused_ordering(81) 00:11:06.164 fused_ordering(82) 00:11:06.164 fused_ordering(83) 00:11:06.164 fused_ordering(84) 00:11:06.164 fused_ordering(85) 00:11:06.164 fused_ordering(86) 00:11:06.164 fused_ordering(87) 00:11:06.164 fused_ordering(88) 00:11:06.164 fused_ordering(89) 00:11:06.164 fused_ordering(90) 00:11:06.164 fused_ordering(91) 00:11:06.164 fused_ordering(92) 00:11:06.164 fused_ordering(93) 00:11:06.164 fused_ordering(94) 00:11:06.164 fused_ordering(95) 00:11:06.164 fused_ordering(96) 00:11:06.164 
fused_ordering(97) 00:11:06.164 fused_ordering(98) 00:11:06.164 fused_ordering(99) 00:11:06.164 fused_ordering(100) 00:11:06.164 fused_ordering(101) 00:11:06.164 fused_ordering(102) 00:11:06.164 fused_ordering(103) 00:11:06.164 fused_ordering(104) 00:11:06.164 fused_ordering(105) 00:11:06.164 fused_ordering(106) 00:11:06.164 fused_ordering(107) 00:11:06.164 fused_ordering(108) 00:11:06.164 fused_ordering(109) 00:11:06.164 fused_ordering(110) 00:11:06.164 fused_ordering(111) 00:11:06.164 fused_ordering(112) 00:11:06.164 fused_ordering(113) 00:11:06.164 fused_ordering(114) 00:11:06.164 fused_ordering(115) 00:11:06.164 fused_ordering(116) 00:11:06.164 fused_ordering(117) 00:11:06.164 fused_ordering(118) 00:11:06.164 fused_ordering(119) 00:11:06.164 fused_ordering(120) 00:11:06.164 fused_ordering(121) 00:11:06.164 fused_ordering(122) 00:11:06.164 fused_ordering(123) 00:11:06.164 fused_ordering(124) 00:11:06.164 fused_ordering(125) 00:11:06.164 fused_ordering(126) 00:11:06.164 fused_ordering(127) 00:11:06.164 fused_ordering(128) 00:11:06.164 fused_ordering(129) 00:11:06.164 fused_ordering(130) 00:11:06.164 fused_ordering(131) 00:11:06.164 fused_ordering(132) 00:11:06.164 fused_ordering(133) 00:11:06.164 fused_ordering(134) 00:11:06.164 fused_ordering(135) 00:11:06.164 fused_ordering(136) 00:11:06.164 fused_ordering(137) 00:11:06.164 fused_ordering(138) 00:11:06.164 fused_ordering(139) 00:11:06.164 fused_ordering(140) 00:11:06.164 fused_ordering(141) 00:11:06.164 fused_ordering(142) 00:11:06.164 fused_ordering(143) 00:11:06.164 fused_ordering(144) 00:11:06.164 fused_ordering(145) 00:11:06.164 fused_ordering(146) 00:11:06.164 fused_ordering(147) 00:11:06.164 fused_ordering(148) 00:11:06.164 fused_ordering(149) 00:11:06.164 fused_ordering(150) 00:11:06.164 fused_ordering(151) 00:11:06.164 fused_ordering(152) 00:11:06.164 fused_ordering(153) 00:11:06.164 fused_ordering(154) 00:11:06.164 fused_ordering(155) 00:11:06.164 fused_ordering(156) 00:11:06.164 fused_ordering(157) 00:11:06.164 fused_ordering(158) 00:11:06.164 fused_ordering(159) 00:11:06.164 fused_ordering(160) 00:11:06.164 fused_ordering(161) 00:11:06.164 fused_ordering(162) 00:11:06.164 fused_ordering(163) 00:11:06.164 fused_ordering(164) 00:11:06.164 fused_ordering(165) 00:11:06.164 fused_ordering(166) 00:11:06.164 fused_ordering(167) 00:11:06.164 fused_ordering(168) 00:11:06.164 fused_ordering(169) 00:11:06.164 fused_ordering(170) 00:11:06.164 fused_ordering(171) 00:11:06.164 fused_ordering(172) 00:11:06.164 fused_ordering(173) 00:11:06.164 fused_ordering(174) 00:11:06.164 fused_ordering(175) 00:11:06.164 fused_ordering(176) 00:11:06.164 fused_ordering(177) 00:11:06.164 fused_ordering(178) 00:11:06.164 fused_ordering(179) 00:11:06.164 fused_ordering(180) 00:11:06.164 fused_ordering(181) 00:11:06.164 fused_ordering(182) 00:11:06.164 fused_ordering(183) 00:11:06.164 fused_ordering(184) 00:11:06.164 fused_ordering(185) 00:11:06.164 fused_ordering(186) 00:11:06.164 fused_ordering(187) 00:11:06.164 fused_ordering(188) 00:11:06.164 fused_ordering(189) 00:11:06.164 fused_ordering(190) 00:11:06.164 fused_ordering(191) 00:11:06.164 fused_ordering(192) 00:11:06.164 fused_ordering(193) 00:11:06.164 fused_ordering(194) 00:11:06.164 fused_ordering(195) 00:11:06.164 fused_ordering(196) 00:11:06.164 fused_ordering(197) 00:11:06.164 fused_ordering(198) 00:11:06.164 fused_ordering(199) 00:11:06.164 fused_ordering(200) 00:11:06.164 fused_ordering(201) 00:11:06.164 fused_ordering(202) 00:11:06.164 fused_ordering(203) 00:11:06.164 fused_ordering(204) 
00:11:06.164 fused_ordering(205) 00:11:06.425 fused_ordering(206) 00:11:06.425 fused_ordering(207) 00:11:06.425 fused_ordering(208) 00:11:06.425 fused_ordering(209) 00:11:06.425 fused_ordering(210) 00:11:06.425 fused_ordering(211) 00:11:06.425 fused_ordering(212) 00:11:06.425 fused_ordering(213) 00:11:06.425 fused_ordering(214) 00:11:06.425 fused_ordering(215) 00:11:06.425 fused_ordering(216) 00:11:06.425 fused_ordering(217) 00:11:06.425 fused_ordering(218) 00:11:06.425 fused_ordering(219) 00:11:06.425 fused_ordering(220) 00:11:06.425 fused_ordering(221) 00:11:06.425 fused_ordering(222) 00:11:06.425 fused_ordering(223) 00:11:06.425 fused_ordering(224) 00:11:06.425 fused_ordering(225) 00:11:06.425 fused_ordering(226) 00:11:06.425 fused_ordering(227) 00:11:06.425 fused_ordering(228) 00:11:06.425 fused_ordering(229) 00:11:06.425 fused_ordering(230) 00:11:06.425 fused_ordering(231) 00:11:06.425 fused_ordering(232) 00:11:06.425 fused_ordering(233) 00:11:06.425 fused_ordering(234) 00:11:06.425 fused_ordering(235) 00:11:06.425 fused_ordering(236) 00:11:06.425 fused_ordering(237) 00:11:06.425 fused_ordering(238) 00:11:06.425 fused_ordering(239) 00:11:06.425 fused_ordering(240) 00:11:06.425 fused_ordering(241) 00:11:06.425 fused_ordering(242) 00:11:06.425 fused_ordering(243) 00:11:06.425 fused_ordering(244) 00:11:06.425 fused_ordering(245) 00:11:06.425 fused_ordering(246) 00:11:06.425 fused_ordering(247) 00:11:06.425 fused_ordering(248) 00:11:06.425 fused_ordering(249) 00:11:06.425 fused_ordering(250) 00:11:06.425 fused_ordering(251) 00:11:06.425 fused_ordering(252) 00:11:06.425 fused_ordering(253) 00:11:06.425 fused_ordering(254) 00:11:06.425 fused_ordering(255) 00:11:06.425 fused_ordering(256) 00:11:06.425 fused_ordering(257) 00:11:06.425 fused_ordering(258) 00:11:06.425 fused_ordering(259) 00:11:06.425 fused_ordering(260) 00:11:06.425 fused_ordering(261) 00:11:06.425 fused_ordering(262) 00:11:06.425 fused_ordering(263) 00:11:06.425 fused_ordering(264) 00:11:06.425 fused_ordering(265) 00:11:06.425 fused_ordering(266) 00:11:06.425 fused_ordering(267) 00:11:06.425 fused_ordering(268) 00:11:06.425 fused_ordering(269) 00:11:06.425 fused_ordering(270) 00:11:06.425 fused_ordering(271) 00:11:06.425 fused_ordering(272) 00:11:06.425 fused_ordering(273) 00:11:06.425 fused_ordering(274) 00:11:06.425 fused_ordering(275) 00:11:06.425 fused_ordering(276) 00:11:06.425 fused_ordering(277) 00:11:06.425 fused_ordering(278) 00:11:06.425 fused_ordering(279) 00:11:06.425 fused_ordering(280) 00:11:06.425 fused_ordering(281) 00:11:06.425 fused_ordering(282) 00:11:06.425 fused_ordering(283) 00:11:06.425 fused_ordering(284) 00:11:06.425 fused_ordering(285) 00:11:06.425 fused_ordering(286) 00:11:06.425 fused_ordering(287) 00:11:06.425 fused_ordering(288) 00:11:06.425 fused_ordering(289) 00:11:06.425 fused_ordering(290) 00:11:06.425 fused_ordering(291) 00:11:06.425 fused_ordering(292) 00:11:06.425 fused_ordering(293) 00:11:06.425 fused_ordering(294) 00:11:06.425 fused_ordering(295) 00:11:06.425 fused_ordering(296) 00:11:06.425 fused_ordering(297) 00:11:06.425 fused_ordering(298) 00:11:06.425 fused_ordering(299) 00:11:06.425 fused_ordering(300) 00:11:06.425 fused_ordering(301) 00:11:06.425 fused_ordering(302) 00:11:06.425 fused_ordering(303) 00:11:06.425 fused_ordering(304) 00:11:06.425 fused_ordering(305) 00:11:06.425 fused_ordering(306) 00:11:06.425 fused_ordering(307) 00:11:06.425 fused_ordering(308) 00:11:06.425 fused_ordering(309) 00:11:06.425 fused_ordering(310) 00:11:06.425 fused_ordering(311) 00:11:06.425 
fused_ordering(312) 00:11:06.425 fused_ordering(313) 00:11:06.425 fused_ordering(314) 00:11:06.425 fused_ordering(315) 00:11:06.425 fused_ordering(316) 00:11:06.425 fused_ordering(317) 00:11:06.425 fused_ordering(318) 00:11:06.425 fused_ordering(319) 00:11:06.425 fused_ordering(320) 00:11:06.425 fused_ordering(321) 00:11:06.426 fused_ordering(322) 00:11:06.426 fused_ordering(323) 00:11:06.426 fused_ordering(324) 00:11:06.426 fused_ordering(325) 00:11:06.426 fused_ordering(326) 00:11:06.426 fused_ordering(327) 00:11:06.426 fused_ordering(328) 00:11:06.426 fused_ordering(329) 00:11:06.426 fused_ordering(330) 00:11:06.426 fused_ordering(331) 00:11:06.426 fused_ordering(332) 00:11:06.426 fused_ordering(333) 00:11:06.426 fused_ordering(334) 00:11:06.426 fused_ordering(335) 00:11:06.426 fused_ordering(336) 00:11:06.426 fused_ordering(337) 00:11:06.426 fused_ordering(338) 00:11:06.426 fused_ordering(339) 00:11:06.426 fused_ordering(340) 00:11:06.426 fused_ordering(341) 00:11:06.426 fused_ordering(342) 00:11:06.426 fused_ordering(343) 00:11:06.426 fused_ordering(344) 00:11:06.426 fused_ordering(345) 00:11:06.426 fused_ordering(346) 00:11:06.426 fused_ordering(347) 00:11:06.426 fused_ordering(348) 00:11:06.426 fused_ordering(349) 00:11:06.426 fused_ordering(350) 00:11:06.426 fused_ordering(351) 00:11:06.426 fused_ordering(352) 00:11:06.426 fused_ordering(353) 00:11:06.426 fused_ordering(354) 00:11:06.426 fused_ordering(355) 00:11:06.426 fused_ordering(356) 00:11:06.426 fused_ordering(357) 00:11:06.426 fused_ordering(358) 00:11:06.426 fused_ordering(359) 00:11:06.426 fused_ordering(360) 00:11:06.426 fused_ordering(361) 00:11:06.426 fused_ordering(362) 00:11:06.426 fused_ordering(363) 00:11:06.426 fused_ordering(364) 00:11:06.426 fused_ordering(365) 00:11:06.426 fused_ordering(366) 00:11:06.426 fused_ordering(367) 00:11:06.426 fused_ordering(368) 00:11:06.426 fused_ordering(369) 00:11:06.426 fused_ordering(370) 00:11:06.426 fused_ordering(371) 00:11:06.426 fused_ordering(372) 00:11:06.426 fused_ordering(373) 00:11:06.426 fused_ordering(374) 00:11:06.426 fused_ordering(375) 00:11:06.426 fused_ordering(376) 00:11:06.426 fused_ordering(377) 00:11:06.426 fused_ordering(378) 00:11:06.426 fused_ordering(379) 00:11:06.426 fused_ordering(380) 00:11:06.426 fused_ordering(381) 00:11:06.426 fused_ordering(382) 00:11:06.426 fused_ordering(383) 00:11:06.426 fused_ordering(384) 00:11:06.426 fused_ordering(385) 00:11:06.426 fused_ordering(386) 00:11:06.426 fused_ordering(387) 00:11:06.426 fused_ordering(388) 00:11:06.426 fused_ordering(389) 00:11:06.426 fused_ordering(390) 00:11:06.426 fused_ordering(391) 00:11:06.426 fused_ordering(392) 00:11:06.426 fused_ordering(393) 00:11:06.426 fused_ordering(394) 00:11:06.426 fused_ordering(395) 00:11:06.426 fused_ordering(396) 00:11:06.426 fused_ordering(397) 00:11:06.426 fused_ordering(398) 00:11:06.426 fused_ordering(399) 00:11:06.426 fused_ordering(400) 00:11:06.426 fused_ordering(401) 00:11:06.426 fused_ordering(402) 00:11:06.426 fused_ordering(403) 00:11:06.426 fused_ordering(404) 00:11:06.426 fused_ordering(405) 00:11:06.426 fused_ordering(406) 00:11:06.426 fused_ordering(407) 00:11:06.426 fused_ordering(408) 00:11:06.426 fused_ordering(409) 00:11:06.426 fused_ordering(410) 00:11:06.997 fused_ordering(411) 00:11:06.997 fused_ordering(412) 00:11:06.997 fused_ordering(413) 00:11:06.997 fused_ordering(414) 00:11:06.997 fused_ordering(415) 00:11:06.997 fused_ordering(416) 00:11:06.997 fused_ordering(417) 00:11:06.997 fused_ordering(418) 00:11:06.997 fused_ordering(419) 
00:11:06.997 fused_ordering(420) 00:11:06.997 fused_ordering(421) 00:11:06.997 fused_ordering(422) 00:11:06.997 fused_ordering(423) 00:11:06.997 fused_ordering(424) 00:11:06.997 fused_ordering(425) 00:11:06.997 fused_ordering(426) 00:11:06.997 fused_ordering(427) 00:11:06.997 fused_ordering(428) 00:11:06.997 fused_ordering(429) 00:11:06.997 fused_ordering(430) 00:11:06.997 fused_ordering(431) 00:11:06.997 fused_ordering(432) 00:11:06.997 fused_ordering(433) 00:11:06.997 fused_ordering(434) 00:11:06.997 fused_ordering(435) 00:11:06.997 fused_ordering(436) 00:11:06.997 fused_ordering(437) 00:11:06.997 fused_ordering(438) 00:11:06.997 fused_ordering(439) 00:11:06.997 fused_ordering(440) 00:11:06.997 fused_ordering(441) 00:11:06.997 fused_ordering(442) 00:11:06.997 fused_ordering(443) 00:11:06.997 fused_ordering(444) 00:11:06.997 fused_ordering(445) 00:11:06.997 fused_ordering(446) 00:11:06.997 fused_ordering(447) 00:11:06.997 fused_ordering(448) 00:11:06.997 fused_ordering(449) 00:11:06.997 fused_ordering(450) 00:11:06.997 fused_ordering(451) 00:11:06.997 fused_ordering(452) 00:11:06.997 fused_ordering(453) 00:11:06.997 fused_ordering(454) 00:11:06.997 fused_ordering(455) 00:11:06.997 fused_ordering(456) 00:11:06.997 fused_ordering(457) 00:11:06.997 fused_ordering(458) 00:11:06.997 fused_ordering(459) 00:11:06.997 fused_ordering(460) 00:11:06.997 fused_ordering(461) 00:11:06.997 fused_ordering(462) 00:11:06.997 fused_ordering(463) 00:11:06.997 fused_ordering(464) 00:11:06.997 fused_ordering(465) 00:11:06.997 fused_ordering(466) 00:11:06.997 fused_ordering(467) 00:11:06.997 fused_ordering(468) 00:11:06.997 fused_ordering(469) 00:11:06.997 fused_ordering(470) 00:11:06.997 fused_ordering(471) 00:11:06.997 fused_ordering(472) 00:11:06.997 fused_ordering(473) 00:11:06.997 fused_ordering(474) 00:11:06.997 fused_ordering(475) 00:11:06.997 fused_ordering(476) 00:11:06.997 fused_ordering(477) 00:11:06.997 fused_ordering(478) 00:11:06.997 fused_ordering(479) 00:11:06.997 fused_ordering(480) 00:11:06.997 fused_ordering(481) 00:11:06.997 fused_ordering(482) 00:11:06.997 fused_ordering(483) 00:11:06.997 fused_ordering(484) 00:11:06.997 fused_ordering(485) 00:11:06.997 fused_ordering(486) 00:11:06.997 fused_ordering(487) 00:11:06.997 fused_ordering(488) 00:11:06.998 fused_ordering(489) 00:11:06.998 fused_ordering(490) 00:11:06.998 fused_ordering(491) 00:11:06.998 fused_ordering(492) 00:11:06.998 fused_ordering(493) 00:11:06.998 fused_ordering(494) 00:11:06.998 fused_ordering(495) 00:11:06.998 fused_ordering(496) 00:11:06.998 fused_ordering(497) 00:11:06.998 fused_ordering(498) 00:11:06.998 fused_ordering(499) 00:11:06.998 fused_ordering(500) 00:11:06.998 fused_ordering(501) 00:11:06.998 fused_ordering(502) 00:11:06.998 fused_ordering(503) 00:11:06.998 fused_ordering(504) 00:11:06.998 fused_ordering(505) 00:11:06.998 fused_ordering(506) 00:11:06.998 fused_ordering(507) 00:11:06.998 fused_ordering(508) 00:11:06.998 fused_ordering(509) 00:11:06.998 fused_ordering(510) 00:11:06.998 fused_ordering(511) 00:11:06.998 fused_ordering(512) 00:11:06.998 fused_ordering(513) 00:11:06.998 fused_ordering(514) 00:11:06.998 fused_ordering(515) 00:11:06.998 fused_ordering(516) 00:11:06.998 fused_ordering(517) 00:11:06.998 fused_ordering(518) 00:11:06.998 fused_ordering(519) 00:11:06.998 fused_ordering(520) 00:11:06.998 fused_ordering(521) 00:11:06.998 fused_ordering(522) 00:11:06.998 fused_ordering(523) 00:11:06.998 fused_ordering(524) 00:11:06.998 fused_ordering(525) 00:11:06.998 fused_ordering(526) 00:11:06.998 
fused_ordering(527) 00:11:06.998 fused_ordering(528) 00:11:06.998 fused_ordering(529) 00:11:06.998 fused_ordering(530) 00:11:06.998 fused_ordering(531) 00:11:06.998 fused_ordering(532) 00:11:06.998 fused_ordering(533) 00:11:06.998 fused_ordering(534) 00:11:06.998 fused_ordering(535) 00:11:06.998 fused_ordering(536) 00:11:06.998 fused_ordering(537) 00:11:06.998 fused_ordering(538) 00:11:06.998 fused_ordering(539) 00:11:06.998 fused_ordering(540) 00:11:06.998 fused_ordering(541) 00:11:06.998 fused_ordering(542) 00:11:06.998 fused_ordering(543) 00:11:06.998 fused_ordering(544) 00:11:06.998 fused_ordering(545) 00:11:06.998 fused_ordering(546) 00:11:06.998 fused_ordering(547) 00:11:06.998 fused_ordering(548) 00:11:06.998 fused_ordering(549) 00:11:06.998 fused_ordering(550) 00:11:06.998 fused_ordering(551) 00:11:06.998 fused_ordering(552) 00:11:06.998 fused_ordering(553) 00:11:06.998 fused_ordering(554) 00:11:06.998 fused_ordering(555) 00:11:06.998 fused_ordering(556) 00:11:06.998 fused_ordering(557) 00:11:06.998 fused_ordering(558) 00:11:06.998 fused_ordering(559) 00:11:06.998 fused_ordering(560) 00:11:06.998 fused_ordering(561) 00:11:06.998 fused_ordering(562) 00:11:06.998 fused_ordering(563) 00:11:06.998 fused_ordering(564) 00:11:06.998 fused_ordering(565) 00:11:06.998 fused_ordering(566) 00:11:06.998 fused_ordering(567) 00:11:06.998 fused_ordering(568) 00:11:06.998 fused_ordering(569) 00:11:06.998 fused_ordering(570) 00:11:06.998 fused_ordering(571) 00:11:06.998 fused_ordering(572) 00:11:06.998 fused_ordering(573) 00:11:06.998 fused_ordering(574) 00:11:06.998 fused_ordering(575) 00:11:06.998 fused_ordering(576) 00:11:06.998 fused_ordering(577) 00:11:06.998 fused_ordering(578) 00:11:06.998 fused_ordering(579) 00:11:06.998 fused_ordering(580) 00:11:06.998 fused_ordering(581) 00:11:06.998 fused_ordering(582) 00:11:06.998 fused_ordering(583) 00:11:06.998 fused_ordering(584) 00:11:06.998 fused_ordering(585) 00:11:06.998 fused_ordering(586) 00:11:06.998 fused_ordering(587) 00:11:06.998 fused_ordering(588) 00:11:06.998 fused_ordering(589) 00:11:06.998 fused_ordering(590) 00:11:06.998 fused_ordering(591) 00:11:06.998 fused_ordering(592) 00:11:06.998 fused_ordering(593) 00:11:06.998 fused_ordering(594) 00:11:06.998 fused_ordering(595) 00:11:06.998 fused_ordering(596) 00:11:06.998 fused_ordering(597) 00:11:06.998 fused_ordering(598) 00:11:06.998 fused_ordering(599) 00:11:06.998 fused_ordering(600) 00:11:06.998 fused_ordering(601) 00:11:06.998 fused_ordering(602) 00:11:06.998 fused_ordering(603) 00:11:06.998 fused_ordering(604) 00:11:06.998 fused_ordering(605) 00:11:06.998 fused_ordering(606) 00:11:06.998 fused_ordering(607) 00:11:06.998 fused_ordering(608) 00:11:06.998 fused_ordering(609) 00:11:06.998 fused_ordering(610) 00:11:06.998 fused_ordering(611) 00:11:06.998 fused_ordering(612) 00:11:06.998 fused_ordering(613) 00:11:06.998 fused_ordering(614) 00:11:06.998 fused_ordering(615) 00:11:07.569 fused_ordering(616) 00:11:07.569 fused_ordering(617) 00:11:07.569 fused_ordering(618) 00:11:07.569 fused_ordering(619) 00:11:07.569 fused_ordering(620) 00:11:07.569 fused_ordering(621) 00:11:07.569 fused_ordering(622) 00:11:07.569 fused_ordering(623) 00:11:07.569 fused_ordering(624) 00:11:07.569 fused_ordering(625) 00:11:07.569 fused_ordering(626) 00:11:07.569 fused_ordering(627) 00:11:07.569 fused_ordering(628) 00:11:07.569 fused_ordering(629) 00:11:07.569 fused_ordering(630) 00:11:07.569 fused_ordering(631) 00:11:07.569 fused_ordering(632) 00:11:07.569 fused_ordering(633) 00:11:07.569 fused_ordering(634) 
00:11:07.569 fused_ordering(635) 00:11:07.569 fused_ordering(636) 00:11:07.569 fused_ordering(637) 00:11:07.569 fused_ordering(638) 00:11:07.569 fused_ordering(639) 00:11:07.569 fused_ordering(640) 00:11:07.569 fused_ordering(641) 00:11:07.569 fused_ordering(642) 00:11:07.569 fused_ordering(643) 00:11:07.569 fused_ordering(644) 00:11:07.569 fused_ordering(645) 00:11:07.569 fused_ordering(646) 00:11:07.569 fused_ordering(647) 00:11:07.569 fused_ordering(648) 00:11:07.569 fused_ordering(649) 00:11:07.569 fused_ordering(650) 00:11:07.569 fused_ordering(651) 00:11:07.569 fused_ordering(652) 00:11:07.569 fused_ordering(653) 00:11:07.569 fused_ordering(654) 00:11:07.569 fused_ordering(655) 00:11:07.569 fused_ordering(656) 00:11:07.569 fused_ordering(657) 00:11:07.569 fused_ordering(658) 00:11:07.569 fused_ordering(659) 00:11:07.569 fused_ordering(660) 00:11:07.569 fused_ordering(661) 00:11:07.569 fused_ordering(662) 00:11:07.569 fused_ordering(663) 00:11:07.569 fused_ordering(664) 00:11:07.569 fused_ordering(665) 00:11:07.569 fused_ordering(666) 00:11:07.569 fused_ordering(667) 00:11:07.569 fused_ordering(668) 00:11:07.569 fused_ordering(669) 00:11:07.569 fused_ordering(670) 00:11:07.569 fused_ordering(671) 00:11:07.569 fused_ordering(672) 00:11:07.569 fused_ordering(673) 00:11:07.569 fused_ordering(674) 00:11:07.569 fused_ordering(675) 00:11:07.569 fused_ordering(676) 00:11:07.569 fused_ordering(677) 00:11:07.569 fused_ordering(678) 00:11:07.569 fused_ordering(679) 00:11:07.569 fused_ordering(680) 00:11:07.569 fused_ordering(681) 00:11:07.569 fused_ordering(682) 00:11:07.569 fused_ordering(683) 00:11:07.569 fused_ordering(684) 00:11:07.569 fused_ordering(685) 00:11:07.569 fused_ordering(686) 00:11:07.569 fused_ordering(687) 00:11:07.569 fused_ordering(688) 00:11:07.569 fused_ordering(689) 00:11:07.569 fused_ordering(690) 00:11:07.569 fused_ordering(691) 00:11:07.569 fused_ordering(692) 00:11:07.569 fused_ordering(693) 00:11:07.569 fused_ordering(694) 00:11:07.569 fused_ordering(695) 00:11:07.569 fused_ordering(696) 00:11:07.569 fused_ordering(697) 00:11:07.569 fused_ordering(698) 00:11:07.569 fused_ordering(699) 00:11:07.569 fused_ordering(700) 00:11:07.569 fused_ordering(701) 00:11:07.569 fused_ordering(702) 00:11:07.569 fused_ordering(703) 00:11:07.569 fused_ordering(704) 00:11:07.569 fused_ordering(705) 00:11:07.569 fused_ordering(706) 00:11:07.569 fused_ordering(707) 00:11:07.569 fused_ordering(708) 00:11:07.569 fused_ordering(709) 00:11:07.569 fused_ordering(710) 00:11:07.569 fused_ordering(711) 00:11:07.569 fused_ordering(712) 00:11:07.569 fused_ordering(713) 00:11:07.569 fused_ordering(714) 00:11:07.569 fused_ordering(715) 00:11:07.569 fused_ordering(716) 00:11:07.569 fused_ordering(717) 00:11:07.569 fused_ordering(718) 00:11:07.569 fused_ordering(719) 00:11:07.569 fused_ordering(720) 00:11:07.569 fused_ordering(721) 00:11:07.569 fused_ordering(722) 00:11:07.569 fused_ordering(723) 00:11:07.569 fused_ordering(724) 00:11:07.569 fused_ordering(725) 00:11:07.569 fused_ordering(726) 00:11:07.569 fused_ordering(727) 00:11:07.569 fused_ordering(728) 00:11:07.569 fused_ordering(729) 00:11:07.569 fused_ordering(730) 00:11:07.569 fused_ordering(731) 00:11:07.569 fused_ordering(732) 00:11:07.569 fused_ordering(733) 00:11:07.569 fused_ordering(734) 00:11:07.569 fused_ordering(735) 00:11:07.569 fused_ordering(736) 00:11:07.569 fused_ordering(737) 00:11:07.569 fused_ordering(738) 00:11:07.569 fused_ordering(739) 00:11:07.569 fused_ordering(740) 00:11:07.569 fused_ordering(741) 00:11:07.569 
fused_ordering(742) 00:11:07.569 fused_ordering(743) 00:11:07.569 fused_ordering(744) 00:11:07.569 fused_ordering(745) 00:11:07.569 fused_ordering(746) 00:11:07.569 fused_ordering(747) 00:11:07.569 fused_ordering(748) 00:11:07.569 fused_ordering(749) 00:11:07.569 fused_ordering(750) 00:11:07.569 fused_ordering(751) 00:11:07.569 fused_ordering(752) 00:11:07.569 fused_ordering(753) 00:11:07.569 fused_ordering(754) 00:11:07.569 fused_ordering(755) 00:11:07.569 fused_ordering(756) 00:11:07.569 fused_ordering(757) 00:11:07.570 fused_ordering(758) 00:11:07.570 fused_ordering(759) 00:11:07.570 fused_ordering(760) 00:11:07.570 fused_ordering(761) 00:11:07.570 fused_ordering(762) 00:11:07.570 fused_ordering(763) 00:11:07.570 fused_ordering(764) 00:11:07.570 fused_ordering(765) 00:11:07.570 fused_ordering(766) 00:11:07.570 fused_ordering(767) 00:11:07.570 fused_ordering(768) 00:11:07.570 fused_ordering(769) 00:11:07.570 fused_ordering(770) 00:11:07.570 fused_ordering(771) 00:11:07.570 fused_ordering(772) 00:11:07.570 fused_ordering(773) 00:11:07.570 fused_ordering(774) 00:11:07.570 fused_ordering(775) 00:11:07.570 fused_ordering(776) 00:11:07.570 fused_ordering(777) 00:11:07.570 fused_ordering(778) 00:11:07.570 fused_ordering(779) 00:11:07.570 fused_ordering(780) 00:11:07.570 fused_ordering(781) 00:11:07.570 fused_ordering(782) 00:11:07.570 fused_ordering(783) 00:11:07.570 fused_ordering(784) 00:11:07.570 fused_ordering(785) 00:11:07.570 fused_ordering(786) 00:11:07.570 fused_ordering(787) 00:11:07.570 fused_ordering(788) 00:11:07.570 fused_ordering(789) 00:11:07.570 fused_ordering(790) 00:11:07.570 fused_ordering(791) 00:11:07.570 fused_ordering(792) 00:11:07.570 fused_ordering(793) 00:11:07.570 fused_ordering(794) 00:11:07.570 fused_ordering(795) 00:11:07.570 fused_ordering(796) 00:11:07.570 fused_ordering(797) 00:11:07.570 fused_ordering(798) 00:11:07.570 fused_ordering(799) 00:11:07.570 fused_ordering(800) 00:11:07.570 fused_ordering(801) 00:11:07.570 fused_ordering(802) 00:11:07.570 fused_ordering(803) 00:11:07.570 fused_ordering(804) 00:11:07.570 fused_ordering(805) 00:11:07.570 fused_ordering(806) 00:11:07.570 fused_ordering(807) 00:11:07.570 fused_ordering(808) 00:11:07.570 fused_ordering(809) 00:11:07.570 fused_ordering(810) 00:11:07.570 fused_ordering(811) 00:11:07.570 fused_ordering(812) 00:11:07.570 fused_ordering(813) 00:11:07.570 fused_ordering(814) 00:11:07.570 fused_ordering(815) 00:11:07.570 fused_ordering(816) 00:11:07.570 fused_ordering(817) 00:11:07.570 fused_ordering(818) 00:11:07.570 fused_ordering(819) 00:11:07.570 fused_ordering(820) 00:11:08.141 fused_ordering(821) 00:11:08.141 fused_ordering(822) 00:11:08.141 fused_ordering(823) 00:11:08.141 fused_ordering(824) 00:11:08.141 fused_ordering(825) 00:11:08.141 fused_ordering(826) 00:11:08.141 fused_ordering(827) 00:11:08.141 fused_ordering(828) 00:11:08.141 fused_ordering(829) 00:11:08.141 fused_ordering(830) 00:11:08.141 fused_ordering(831) 00:11:08.141 fused_ordering(832) 00:11:08.141 fused_ordering(833) 00:11:08.141 fused_ordering(834) 00:11:08.141 fused_ordering(835) 00:11:08.141 fused_ordering(836) 00:11:08.141 fused_ordering(837) 00:11:08.141 fused_ordering(838) 00:11:08.141 fused_ordering(839) 00:11:08.141 fused_ordering(840) 00:11:08.141 fused_ordering(841) 00:11:08.141 fused_ordering(842) 00:11:08.141 fused_ordering(843) 00:11:08.141 fused_ordering(844) 00:11:08.141 fused_ordering(845) 00:11:08.141 fused_ordering(846) 00:11:08.141 fused_ordering(847) 00:11:08.141 fused_ordering(848) 00:11:08.141 fused_ordering(849) 
00:11:08.141 fused_ordering(850) 00:11:08.141 fused_ordering(851) 00:11:08.141 fused_ordering(852) 00:11:08.141 fused_ordering(853) 00:11:08.141 fused_ordering(854) 00:11:08.141 fused_ordering(855) 00:11:08.141 fused_ordering(856) 00:11:08.141 fused_ordering(857) 00:11:08.141 fused_ordering(858) 00:11:08.141 fused_ordering(859) 00:11:08.141 fused_ordering(860) 00:11:08.141 fused_ordering(861) 00:11:08.141 fused_ordering(862) 00:11:08.141 fused_ordering(863) 00:11:08.141 fused_ordering(864) 00:11:08.141 fused_ordering(865) 00:11:08.141 fused_ordering(866) 00:11:08.141 fused_ordering(867) 00:11:08.141 fused_ordering(868) 00:11:08.141 fused_ordering(869) 00:11:08.141 fused_ordering(870) 00:11:08.141 fused_ordering(871) 00:11:08.141 fused_ordering(872) 00:11:08.141 fused_ordering(873) 00:11:08.141 fused_ordering(874) 00:11:08.141 fused_ordering(875) 00:11:08.141 fused_ordering(876) 00:11:08.141 fused_ordering(877) 00:11:08.141 fused_ordering(878) 00:11:08.141 fused_ordering(879) 00:11:08.141 fused_ordering(880) 00:11:08.141 fused_ordering(881) 00:11:08.141 fused_ordering(882) 00:11:08.141 fused_ordering(883) 00:11:08.141 fused_ordering(884) 00:11:08.141 fused_ordering(885) 00:11:08.141 fused_ordering(886) 00:11:08.141 fused_ordering(887) 00:11:08.141 fused_ordering(888) 00:11:08.141 fused_ordering(889) 00:11:08.141 fused_ordering(890) 00:11:08.141 fused_ordering(891) 00:11:08.141 fused_ordering(892) 00:11:08.141 fused_ordering(893) 00:11:08.141 fused_ordering(894) 00:11:08.141 fused_ordering(895) 00:11:08.141 fused_ordering(896) 00:11:08.141 fused_ordering(897) 00:11:08.141 fused_ordering(898) 00:11:08.141 fused_ordering(899) 00:11:08.141 fused_ordering(900) 00:11:08.141 fused_ordering(901) 00:11:08.141 fused_ordering(902) 00:11:08.141 fused_ordering(903) 00:11:08.141 fused_ordering(904) 00:11:08.141 fused_ordering(905) 00:11:08.141 fused_ordering(906) 00:11:08.141 fused_ordering(907) 00:11:08.141 fused_ordering(908) 00:11:08.141 fused_ordering(909) 00:11:08.141 fused_ordering(910) 00:11:08.141 fused_ordering(911) 00:11:08.141 fused_ordering(912) 00:11:08.141 fused_ordering(913) 00:11:08.141 fused_ordering(914) 00:11:08.141 fused_ordering(915) 00:11:08.141 fused_ordering(916) 00:11:08.141 fused_ordering(917) 00:11:08.141 fused_ordering(918) 00:11:08.141 fused_ordering(919) 00:11:08.141 fused_ordering(920) 00:11:08.141 fused_ordering(921) 00:11:08.141 fused_ordering(922) 00:11:08.141 fused_ordering(923) 00:11:08.141 fused_ordering(924) 00:11:08.141 fused_ordering(925) 00:11:08.141 fused_ordering(926) 00:11:08.141 fused_ordering(927) 00:11:08.141 fused_ordering(928) 00:11:08.141 fused_ordering(929) 00:11:08.141 fused_ordering(930) 00:11:08.141 fused_ordering(931) 00:11:08.141 fused_ordering(932) 00:11:08.141 fused_ordering(933) 00:11:08.141 fused_ordering(934) 00:11:08.141 fused_ordering(935) 00:11:08.141 fused_ordering(936) 00:11:08.141 fused_ordering(937) 00:11:08.141 fused_ordering(938) 00:11:08.141 fused_ordering(939) 00:11:08.141 fused_ordering(940) 00:11:08.141 fused_ordering(941) 00:11:08.141 fused_ordering(942) 00:11:08.141 fused_ordering(943) 00:11:08.141 fused_ordering(944) 00:11:08.141 fused_ordering(945) 00:11:08.141 fused_ordering(946) 00:11:08.141 fused_ordering(947) 00:11:08.141 fused_ordering(948) 00:11:08.141 fused_ordering(949) 00:11:08.141 fused_ordering(950) 00:11:08.141 fused_ordering(951) 00:11:08.141 fused_ordering(952) 00:11:08.141 fused_ordering(953) 00:11:08.141 fused_ordering(954) 00:11:08.141 fused_ordering(955) 00:11:08.141 fused_ordering(956) 00:11:08.141 
fused_ordering(957) 00:11:08.141 fused_ordering(958) 00:11:08.141 fused_ordering(959) 00:11:08.141 fused_ordering(960) 00:11:08.141 fused_ordering(961) 00:11:08.141 fused_ordering(962) 00:11:08.141 fused_ordering(963) 00:11:08.141 fused_ordering(964) 00:11:08.141 fused_ordering(965) 00:11:08.141 fused_ordering(966) 00:11:08.141 fused_ordering(967) 00:11:08.141 fused_ordering(968) 00:11:08.141 fused_ordering(969) 00:11:08.141 fused_ordering(970) 00:11:08.142 fused_ordering(971) 00:11:08.142 fused_ordering(972) 00:11:08.142 fused_ordering(973) 00:11:08.142 fused_ordering(974) 00:11:08.142 fused_ordering(975) 00:11:08.142 fused_ordering(976) 00:11:08.142 fused_ordering(977) 00:11:08.142 fused_ordering(978) 00:11:08.142 fused_ordering(979) 00:11:08.142 fused_ordering(980) 00:11:08.142 fused_ordering(981) 00:11:08.142 fused_ordering(982) 00:11:08.142 fused_ordering(983) 00:11:08.142 fused_ordering(984) 00:11:08.142 fused_ordering(985) 00:11:08.142 fused_ordering(986) 00:11:08.142 fused_ordering(987) 00:11:08.142 fused_ordering(988) 00:11:08.142 fused_ordering(989) 00:11:08.142 fused_ordering(990) 00:11:08.142 fused_ordering(991) 00:11:08.142 fused_ordering(992) 00:11:08.142 fused_ordering(993) 00:11:08.142 fused_ordering(994) 00:11:08.142 fused_ordering(995) 00:11:08.142 fused_ordering(996) 00:11:08.142 fused_ordering(997) 00:11:08.142 fused_ordering(998) 00:11:08.142 fused_ordering(999) 00:11:08.142 fused_ordering(1000) 00:11:08.142 fused_ordering(1001) 00:11:08.142 fused_ordering(1002) 00:11:08.142 fused_ordering(1003) 00:11:08.142 fused_ordering(1004) 00:11:08.142 fused_ordering(1005) 00:11:08.142 fused_ordering(1006) 00:11:08.142 fused_ordering(1007) 00:11:08.142 fused_ordering(1008) 00:11:08.142 fused_ordering(1009) 00:11:08.142 fused_ordering(1010) 00:11:08.142 fused_ordering(1011) 00:11:08.142 fused_ordering(1012) 00:11:08.142 fused_ordering(1013) 00:11:08.142 fused_ordering(1014) 00:11:08.142 fused_ordering(1015) 00:11:08.142 fused_ordering(1016) 00:11:08.142 fused_ordering(1017) 00:11:08.142 fused_ordering(1018) 00:11:08.142 fused_ordering(1019) 00:11:08.142 fused_ordering(1020) 00:11:08.142 fused_ordering(1021) 00:11:08.142 fused_ordering(1022) 00:11:08.142 fused_ordering(1023) 00:11:08.142 23:55:38 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:08.142 23:55:38 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:08.142 23:55:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:08.142 23:55:38 -- nvmf/common.sh@117 -- # sync 00:11:08.142 23:55:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.142 23:55:38 -- nvmf/common.sh@120 -- # set +e 00:11:08.142 23:55:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.142 23:55:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.142 rmmod nvme_tcp 00:11:08.142 rmmod nvme_fabrics 00:11:08.142 rmmod nvme_keyring 00:11:08.142 23:55:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.142 23:55:38 -- nvmf/common.sh@124 -- # set -e 00:11:08.142 23:55:38 -- nvmf/common.sh@125 -- # return 0 00:11:08.142 23:55:38 -- nvmf/common.sh@478 -- # '[' -n 288545 ']' 00:11:08.142 23:55:38 -- nvmf/common.sh@479 -- # killprocess 288545 00:11:08.142 23:55:38 -- common/autotest_common.sh@936 -- # '[' -z 288545 ']' 00:11:08.142 23:55:38 -- common/autotest_common.sh@940 -- # kill -0 288545 00:11:08.142 23:55:38 -- common/autotest_common.sh@941 -- # uname 00:11:08.142 23:55:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:08.142 23:55:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 288545 00:11:08.142 23:55:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:08.142 23:55:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:08.142 23:55:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 288545' 00:11:08.142 killing process with pid 288545 00:11:08.142 23:55:38 -- common/autotest_common.sh@955 -- # kill 288545 00:11:08.142 23:55:38 -- common/autotest_common.sh@960 -- # wait 288545 00:11:08.402 23:55:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:08.402 23:55:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:08.402 23:55:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:08.402 23:55:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.402 23:55:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.402 23:55:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.402 23:55:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.402 23:55:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.316 23:55:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.316 00:11:10.316 real 0m12.747s 00:11:10.316 user 0m6.832s 00:11:10.316 sys 0m6.604s 00:11:10.316 23:55:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:10.316 23:55:40 -- common/autotest_common.sh@10 -- # set +x 00:11:10.316 ************************************ 00:11:10.316 END TEST nvmf_fused_ordering 00:11:10.316 ************************************ 00:11:10.316 23:55:40 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:10.316 23:55:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:10.316 23:55:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.316 23:55:40 -- common/autotest_common.sh@10 -- # set +x 00:11:10.578 ************************************ 00:11:10.578 START TEST nvmf_delete_subsystem 00:11:10.578 ************************************ 00:11:10.578 23:55:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:10.578 * Looking for test storage... 
00:11:10.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.578 23:55:40 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.578 23:55:40 -- nvmf/common.sh@7 -- # uname -s 00:11:10.578 23:55:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.578 23:55:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.578 23:55:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.578 23:55:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.578 23:55:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.578 23:55:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.578 23:55:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.578 23:55:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.578 23:55:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.578 23:55:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.578 23:55:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.578 23:55:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.578 23:55:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.578 23:55:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.578 23:55:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.578 23:55:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.578 23:55:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.578 23:55:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.578 23:55:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.578 23:55:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.578 23:55:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.578 23:55:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.578 23:55:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.578 23:55:40 -- paths/export.sh@5 -- # export PATH 00:11:10.578 23:55:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.578 23:55:40 -- nvmf/common.sh@47 -- # : 0 00:11:10.578 23:55:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.578 23:55:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.578 23:55:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.578 23:55:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.578 23:55:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.578 23:55:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.578 23:55:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.578 23:55:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.578 23:55:40 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:10.578 23:55:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:10.578 23:55:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.578 23:55:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:10.578 23:55:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:10.578 23:55:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:10.578 23:55:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.578 23:55:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.578 23:55:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.578 23:55:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:10.578 23:55:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:10.578 23:55:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.578 23:55:40 -- common/autotest_common.sh@10 -- # set +x 00:11:18.771 23:55:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:18.771 23:55:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.771 23:55:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.771 23:55:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.771 23:55:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.771 23:55:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.771 23:55:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.771 23:55:47 -- nvmf/common.sh@295 -- # net_devs=() 00:11:18.771 23:55:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.771 23:55:47 -- nvmf/common.sh@296 -- # e810=() 00:11:18.771 23:55:47 -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.771 23:55:47 -- nvmf/common.sh@297 -- # x722=() 
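The block that follows is nvmftestinit discovering the two e810 ports and splitting them across a network namespace: the first port (cvl_0_0) becomes the target side at 10.0.0.2 inside namespace cvl_0_0_ns_spdk, while the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule accepting TCP/4420 and a ping in each direction as a sanity check. A trimmed sketch of that plumbing, using the interface and namespace names exactly as they appear in the output below:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # verbatim from the harness: accept inbound TCP/4420 on cvl_0_1
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator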
00:11:18.771 23:55:47 -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.771 23:55:47 -- nvmf/common.sh@298 -- # mlx=() 00:11:18.771 23:55:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.771 23:55:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.771 23:55:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.771 23:55:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:18.771 23:55:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.771 23:55:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.771 23:55:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:18.771 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:18.771 23:55:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.771 23:55:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:18.771 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:18.771 23:55:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.771 23:55:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.771 23:55:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.771 23:55:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:18.771 23:55:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.771 23:55:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:18.771 Found net devices under 0000:31:00.0: cvl_0_0 00:11:18.771 23:55:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:18.771 23:55:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.771 23:55:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.771 23:55:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:18.771 23:55:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.771 23:55:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:18.771 Found net devices under 0000:31:00.1: cvl_0_1 00:11:18.771 23:55:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.771 23:55:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:18.771 23:55:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:18.771 23:55:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:18.771 23:55:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.771 23:55:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.771 23:55:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.771 23:55:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:18.771 23:55:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.771 23:55:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.771 23:55:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:18.771 23:55:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.771 23:55:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.771 23:55:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:18.771 23:55:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:18.771 23:55:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.771 23:55:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.771 23:55:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.771 23:55:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.771 23:55:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:18.771 23:55:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.771 23:55:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.771 23:55:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.771 23:55:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:11:18.771 00:11:18.771 --- 10.0.0.2 ping statistics --- 00:11:18.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.771 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:11:18.771 23:55:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:11:18.771 00:11:18.771 --- 10.0.0.1 ping statistics --- 00:11:18.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.771 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:11:18.771 23:55:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.771 23:55:47 -- nvmf/common.sh@411 -- # return 0 00:11:18.771 23:55:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:18.771 23:55:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.771 23:55:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:18.771 23:55:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.771 23:55:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:18.771 23:55:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:18.771 23:55:47 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:18.771 23:55:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:18.771 23:55:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:18.771 23:55:47 -- common/autotest_common.sh@10 -- # set +x 00:11:18.771 23:55:47 -- nvmf/common.sh@470 -- # nvmfpid=293322 00:11:18.771 23:55:47 -- nvmf/common.sh@471 -- # waitforlisten 293322 00:11:18.772 23:55:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:18.772 23:55:47 -- common/autotest_common.sh@817 -- # '[' -z 293322 ']' 00:11:18.772 23:55:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.772 23:55:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:18.772 23:55:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.772 23:55:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:18.772 23:55:47 -- common/autotest_common.sh@10 -- # set +x 00:11:18.772 [2024-04-26 23:55:47.903382] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:11:18.772 [2024-04-26 23:55:47.903454] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.772 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.772 [2024-04-26 23:55:47.977905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:18.772 [2024-04-26 23:55:48.052321] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.772 [2024-04-26 23:55:48.052364] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.772 [2024-04-26 23:55:48.052372] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.772 [2024-04-26 23:55:48.052379] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.772 [2024-04-26 23:55:48.052385] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
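At this point nvmfappstart has launched nvmf_tgt inside the target namespace with shm id 0, tracepoint mask 0xFFFF and core mask 0x3 (hence the two reactors reported next), and waitforlisten blocks until the target's RPC socket answers before the script issues any rpc_cmd calls. A simplified stand-in for that launch-and-wait step -- the real waitforlisten also watches the pid, and the /var/tmp/spdk.sock path is assumed here -- might look like:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                  # keep polling until the RPC socket is up
  done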
00:11:18.772 [2024-04-26 23:55:48.052500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.772 [2024-04-26 23:55:48.052502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.772 23:55:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:18.772 23:55:48 -- common/autotest_common.sh@850 -- # return 0 00:11:18.772 23:55:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:18.772 23:55:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:18.772 23:55:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.772 23:55:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.772 23:55:48 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.772 23:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.772 23:55:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.772 [2024-04-26 23:55:48.712357] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.772 23:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.772 23:55:48 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.772 23:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.772 23:55:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.772 23:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.772 23:55:48 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.772 23:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.772 23:55:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.772 [2024-04-26 23:55:48.736538] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.772 23:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.772 23:55:48 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:18.772 23:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.772 23:55:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.772 NULL1 00:11:18.772 23:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.772 23:55:48 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:18.772 23:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.772 23:55:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.772 Delay0 00:11:18.772 23:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.772 23:55:48 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.772 23:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.772 23:55:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.772 23:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.772 23:55:48 -- target/delete_subsystem.sh@28 -- # perf_pid=293667 00:11:18.772 23:55:48 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:18.772 23:55:48 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:18.772 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.772 [2024-04-26 23:55:48.833208] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:20.686 23:55:50 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.686 23:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.686 23:55:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 [2024-04-26 23:55:50.958186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761780 is same with the state(5) to be set 00:11:20.948 Read 
completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed 
with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Write completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 starting I/O failed: -6 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 [2024-04-26 23:55:50.960934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc19000c3d0 is same with the state(5) to be set 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.948 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Read 
completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Read completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:20.949 Write completed with error (sct=0, sc=8) 00:11:21.892 [2024-04-26 23:55:51.931555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x777c40 is same with the state(5) to be set 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 [2024-04-26 23:55:51.961606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761910 is same with the state(5) to be set 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Write completed with 
error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 [2024-04-26 23:55:51.962328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761cd0 is same with the state(5) to be set 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 [2024-04-26 23:55:51.963537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc19000bf90 is same with the state(5) to be set 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Write completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 Read completed with error (sct=0, sc=8) 00:11:21.892 [2024-04-26 23:55:51.963632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc19000c690 is same with the state(5) to be set 00:11:21.892 [2024-04-26 23:55:51.964246] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x777c40 (9): Bad file descriptor 00:11:21.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:21.892 23:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.892 23:55:51 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:21.892 23:55:51 -- target/delete_subsystem.sh@35 -- # kill -0 293667 00:11:21.892 23:55:51 -- 
target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:21.892 Initializing NVMe Controllers 00:11:21.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.892 Controller IO queue size 128, less than required. 00:11:21.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:21.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:21.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:21.892 Initialization complete. Launching workers. 00:11:21.892 ======================================================== 00:11:21.892 Latency(us) 00:11:21.892 Device Information : IOPS MiB/s Average min max 00:11:21.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.76 0.08 894945.54 252.04 1006956.24 00:11:21.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.81 0.08 919538.25 184.39 1009396.83 00:11:21.892 ======================================================== 00:11:21.892 Total : 328.57 0.16 906832.02 184.39 1009396.83 00:11:21.892 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@35 -- # kill -0 293667 00:11:22.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (293667) - No such process 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@45 -- # NOT wait 293667 00:11:22.466 23:55:52 -- common/autotest_common.sh@638 -- # local es=0 00:11:22.466 23:55:52 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 293667 00:11:22.466 23:55:52 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:22.466 23:55:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:22.466 23:55:52 -- common/autotest_common.sh@630 -- # type -t wait 00:11:22.466 23:55:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:22.466 23:55:52 -- common/autotest_common.sh@641 -- # wait 293667 00:11:22.466 23:55:52 -- common/autotest_common.sh@641 -- # es=1 00:11:22.466 23:55:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:22.466 23:55:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:22.466 23:55:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.466 23:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.466 23:55:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.466 23:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.466 23:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.466 23:55:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.466 [2024-04-26 23:55:52.495425] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.466 23:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.466 23:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.466 23:55:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.466 23:55:52 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@54 -- # perf_pid=294352 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@57 -- # kill -0 294352 00:11:22.466 23:55:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:22.466 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.466 [2024-04-26 23:55:52.564175] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:23.037 23:55:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.037 23:55:53 -- target/delete_subsystem.sh@57 -- # kill -0 294352 00:11:23.037 23:55:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.608 23:55:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.608 23:55:53 -- target/delete_subsystem.sh@57 -- # kill -0 294352 00:11:23.608 23:55:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.870 23:55:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.870 23:55:54 -- target/delete_subsystem.sh@57 -- # kill -0 294352 00:11:23.870 23:55:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.442 23:55:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.442 23:55:54 -- target/delete_subsystem.sh@57 -- # kill -0 294352 00:11:24.442 23:55:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.015 23:55:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.015 23:55:55 -- target/delete_subsystem.sh@57 -- # kill -0 294352 00:11:25.015 23:55:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.587 23:55:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.587 23:55:55 -- target/delete_subsystem.sh@57 -- # kill -0 294352 00:11:25.587 23:55:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.587 Initializing NVMe Controllers 00:11:25.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.587 Controller IO queue size 128, less than required. 00:11:25.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:25.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:25.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:25.587 Initialization complete. Launching workers. 
00:11:25.587 ======================================================== 00:11:25.587 Latency(us) 00:11:25.587 Device Information : IOPS MiB/s Average min max 00:11:25.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002116.01 1000237.37 1005713.74 00:11:25.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003838.80 1000282.42 1042060.65 00:11:25.587 ======================================================== 00:11:25.587 Total : 256.00 0.12 1002977.40 1000237.37 1042060.65 00:11:25.587 00:11:25.848 23:55:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.848 23:55:56 -- target/delete_subsystem.sh@57 -- # kill -0 294352 00:11:25.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (294352) - No such process 00:11:25.848 23:55:56 -- target/delete_subsystem.sh@67 -- # wait 294352 00:11:25.848 23:55:56 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:25.848 23:55:56 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:25.848 23:55:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:25.848 23:55:56 -- nvmf/common.sh@117 -- # sync 00:11:25.848 23:55:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.848 23:55:56 -- nvmf/common.sh@120 -- # set +e 00:11:25.848 23:55:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.848 23:55:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.848 rmmod nvme_tcp 00:11:26.110 rmmod nvme_fabrics 00:11:26.110 rmmod nvme_keyring 00:11:26.110 23:55:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.110 23:55:56 -- nvmf/common.sh@124 -- # set -e 00:11:26.110 23:55:56 -- nvmf/common.sh@125 -- # return 0 00:11:26.110 23:55:56 -- nvmf/common.sh@478 -- # '[' -n 293322 ']' 00:11:26.110 23:55:56 -- nvmf/common.sh@479 -- # killprocess 293322 00:11:26.110 23:55:56 -- common/autotest_common.sh@936 -- # '[' -z 293322 ']' 00:11:26.110 23:55:56 -- common/autotest_common.sh@940 -- # kill -0 293322 00:11:26.110 23:55:56 -- common/autotest_common.sh@941 -- # uname 00:11:26.110 23:55:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.110 23:55:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 293322 00:11:26.110 23:55:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:26.110 23:55:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:26.110 23:55:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 293322' 00:11:26.110 killing process with pid 293322 00:11:26.110 23:55:56 -- common/autotest_common.sh@955 -- # kill 293322 00:11:26.110 23:55:56 -- common/autotest_common.sh@960 -- # wait 293322 00:11:26.110 23:55:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:26.110 23:55:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:26.110 23:55:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:26.110 23:55:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.110 23:55:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.110 23:55:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.110 23:55:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.110 23:55:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.661 23:55:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.661 00:11:28.661 real 0m17.725s 00:11:28.662 user 0m30.706s 00:11:28.662 sys 0m6.056s 00:11:28.662 23:55:58 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:28.662 23:55:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 ************************************ 00:11:28.662 END TEST nvmf_delete_subsystem 00:11:28.662 ************************************ 00:11:28.662 23:55:58 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.662 23:55:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:28.662 23:55:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.662 23:55:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.662 ************************************ 00:11:28.662 START TEST nvmf_ns_masking 00:11:28.662 ************************************ 00:11:28.662 23:55:58 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.662 * Looking for test storage... 00:11:28.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.662 23:55:58 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.662 23:55:58 -- nvmf/common.sh@7 -- # uname -s 00:11:28.662 23:55:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.662 23:55:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.662 23:55:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.662 23:55:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.662 23:55:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.662 23:55:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.662 23:55:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.662 23:55:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.662 23:55:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.662 23:55:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.662 23:55:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.662 23:55:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.662 23:55:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.662 23:55:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.662 23:55:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.662 23:55:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.662 23:55:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.662 23:55:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.662 23:55:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.662 23:55:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.662 23:55:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.662 23:55:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.662 23:55:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.662 23:55:58 -- paths/export.sh@5 -- # export PATH 00:11:28.662 23:55:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.662 23:55:58 -- nvmf/common.sh@47 -- # : 0 00:11:28.662 23:55:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.662 23:55:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.662 23:55:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.662 23:55:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.662 23:55:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.662 23:55:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.662 23:55:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.662 23:55:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.662 23:55:58 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:28.662 23:55:58 -- target/ns_masking.sh@11 -- # loops=5 00:11:28.662 23:55:58 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:28.662 23:55:58 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:28.662 23:55:58 -- target/ns_masking.sh@15 -- # uuidgen 00:11:28.662 23:55:58 -- target/ns_masking.sh@15 -- # HOSTID=b5abce41-2097-4987-8e5f-a181e2583414 00:11:28.662 23:55:58 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:28.662 23:55:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:28.662 23:55:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.662 23:55:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:28.662 23:55:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:28.662 23:55:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:28.662 23:55:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.662 23:55:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.662 23:55:58 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:11:28.662 23:55:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:28.662 23:55:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:28.662 23:55:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.662 23:55:58 -- common/autotest_common.sh@10 -- # set +x 00:11:35.256 23:56:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:35.256 23:56:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.256 23:56:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.256 23:56:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.256 23:56:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.256 23:56:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.256 23:56:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.256 23:56:05 -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.256 23:56:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.256 23:56:05 -- nvmf/common.sh@296 -- # e810=() 00:11:35.256 23:56:05 -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.256 23:56:05 -- nvmf/common.sh@297 -- # x722=() 00:11:35.256 23:56:05 -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.256 23:56:05 -- nvmf/common.sh@298 -- # mlx=() 00:11:35.256 23:56:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.256 23:56:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.256 23:56:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.256 23:56:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.256 23:56:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.256 23:56:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.256 23:56:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:35.256 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:35.256 23:56:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.256 23:56:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:35.256 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:35.256 23:56:05 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.256 23:56:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.256 23:56:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.256 23:56:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:35.256 23:56:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.256 23:56:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:35.256 Found net devices under 0000:31:00.0: cvl_0_0 00:11:35.256 23:56:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.256 23:56:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.256 23:56:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.256 23:56:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:35.256 23:56:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.256 23:56:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:35.256 Found net devices under 0000:31:00.1: cvl_0_1 00:11:35.256 23:56:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.256 23:56:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:35.256 23:56:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:35.256 23:56:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:35.256 23:56:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:35.256 23:56:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.256 23:56:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.256 23:56:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.256 23:56:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.256 23:56:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.256 23:56:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.256 23:56:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.256 23:56:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.256 23:56:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.256 23:56:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.256 23:56:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.256 23:56:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.256 23:56:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.256 23:56:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.256 23:56:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.256 23:56:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.256 23:56:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.518 23:56:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.518 23:56:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.518 23:56:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:11:35.518 00:11:35.518 --- 10.0.0.2 ping statistics --- 00:11:35.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.518 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:11:35.518 23:56:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:11:35.518 00:11:35.518 --- 10.0.0.1 ping statistics --- 00:11:35.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.518 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:11:35.518 23:56:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.518 23:56:05 -- nvmf/common.sh@411 -- # return 0 00:11:35.518 23:56:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:35.518 23:56:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.518 23:56:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:35.518 23:56:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:35.518 23:56:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.518 23:56:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:35.518 23:56:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:35.518 23:56:05 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:35.518 23:56:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:35.518 23:56:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:35.518 23:56:05 -- common/autotest_common.sh@10 -- # set +x 00:11:35.518 23:56:05 -- nvmf/common.sh@470 -- # nvmfpid=299246 00:11:35.518 23:56:05 -- nvmf/common.sh@471 -- # waitforlisten 299246 00:11:35.518 23:56:05 -- common/autotest_common.sh@817 -- # '[' -z 299246 ']' 00:11:35.518 23:56:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.518 23:56:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:35.518 23:56:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.518 23:56:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:35.518 23:56:05 -- common/autotest_common.sh@10 -- # set +x 00:11:35.518 23:56:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.518 [2024-04-26 23:56:05.650910] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:11:35.518 [2024-04-26 23:56:05.650974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.518 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.518 [2024-04-26 23:56:05.722717] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.779 [2024-04-26 23:56:05.797693] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
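In outline, the network setup that nvmf/common.sh performs above keeps the SPDK target and the nvme-cli initiator on one machine but in separate network namespaces: the first E810 port (cvl_0_0) is moved into a namespace and addressed as 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as 10.0.0.1. A condensed sketch, using only commands taken from the trace; the cvl_* names and 10.0.0.x addresses are values the harness derived on this host, not fixed constants:

  ip netns add cvl_0_0_ns_spdk                         # namespace that will own the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host

nvmf_tgt is then launched with 'ip netns exec cvl_0_0_ns_spdk' prepended (the nvmfappstart invocation above), so the target listens on 10.0.0.2 while the initiator connects from 10.0.0.1.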
00:11:35.779 [2024-04-26 23:56:05.797734] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.779 [2024-04-26 23:56:05.797743] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.779 [2024-04-26 23:56:05.797750] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.779 [2024-04-26 23:56:05.797755] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.779 [2024-04-26 23:56:05.797867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.779 [2024-04-26 23:56:05.797941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.779 [2024-04-26 23:56:05.798091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.779 [2024-04-26 23:56:05.798091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.350 23:56:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:36.350 23:56:06 -- common/autotest_common.sh@850 -- # return 0 00:11:36.350 23:56:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:36.350 23:56:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:36.350 23:56:06 -- common/autotest_common.sh@10 -- # set +x 00:11:36.350 23:56:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.350 23:56:06 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:36.612 [2024-04-26 23:56:06.602839] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.612 23:56:06 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:36.612 23:56:06 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:36.612 23:56:06 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:36.612 Malloc1 00:11:36.612 23:56:06 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:36.874 Malloc2 00:11:36.874 23:56:06 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.135 23:56:07 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:37.135 23:56:07 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.396 [2024-04-26 23:56:07.467597] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.396 23:56:07 -- target/ns_masking.sh@61 -- # connect 00:11:37.396 23:56:07 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b5abce41-2097-4987-8e5f-a181e2583414 -a 10.0.0.2 -s 4420 -i 4 00:11:37.657 23:56:07 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.657 23:56:07 -- common/autotest_common.sh@1184 -- # local i=0 00:11:37.657 23:56:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.657 23:56:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
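The connection and visibility checks that follow are driven by small helpers in ns_masking.sh and autotest_common.sh. Roughly, and using only commands that appear in the trace (the controller name nvme0 is whatever 'nvme list-subsys -o json' reports for cnode1 on this run, not a fixed name):

  # connect: attach this host with an explicit host NQN and 128-bit host identifier
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I b5abce41-2097-4987-8e5f-a181e2583414 -a 10.0.0.2 -s 4420 -i 4
  # waitforserial: poll until lsblk shows the expected number of namespaces for the SPDK serial
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
  # ns_is_visible 0x1: a namespace counts as visible when list-ns shows it and its NGUID is non-zero
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros -> NSID inactive/masked for this host

The d9e0... and 300b... values printed below appear to be the NGUIDs generated for Malloc1 (NSID 1) and Malloc2 (NSID 2) on this run.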
00:11:37.657 23:56:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:39.574 23:56:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:39.574 23:56:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:39.574 23:56:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.574 23:56:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:39.574 23:56:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.574 23:56:09 -- common/autotest_common.sh@1194 -- # return 0 00:11:39.574 23:56:09 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:39.574 23:56:09 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:39.574 23:56:09 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:39.574 23:56:09 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:39.574 23:56:09 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:39.574 23:56:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.574 23:56:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:39.574 [ 0]:0x1 00:11:39.574 23:56:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.574 23:56:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:39.574 23:56:09 -- target/ns_masking.sh@40 -- # nguid=d9e0b3d37e264ac4a3d88158086ef902 00:11:39.574 23:56:09 -- target/ns_masking.sh@41 -- # [[ d9e0b3d37e264ac4a3d88158086ef902 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.574 23:56:09 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:39.835 23:56:09 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:39.835 23:56:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:39.835 23:56:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.835 [ 0]:0x1 00:11:39.835 23:56:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.835 23:56:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:39.835 23:56:09 -- target/ns_masking.sh@40 -- # nguid=d9e0b3d37e264ac4a3d88158086ef902 00:11:39.835 23:56:09 -- target/ns_masking.sh@41 -- # [[ d9e0b3d37e264ac4a3d88158086ef902 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.835 23:56:09 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:39.835 23:56:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.835 23:56:09 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:39.835 [ 1]:0x2 00:11:39.835 23:56:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:39.835 23:56:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:39.835 23:56:10 -- target/ns_masking.sh@40 -- # nguid=300b0405a74548819325e36fe0d35e71 00:11:39.835 23:56:10 -- target/ns_masking.sh@41 -- # [[ 300b0405a74548819325e36fe0d35e71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.835 23:56:10 -- target/ns_masking.sh@69 -- # disconnect 00:11:39.835 23:56:10 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.097 23:56:10 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.097 23:56:10 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:40.359 23:56:10 -- target/ns_masking.sh@77 -- # connect 1 00:11:40.359 23:56:10 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b5abce41-2097-4987-8e5f-a181e2583414 -a 10.0.0.2 -s 4420 -i 4 00:11:40.621 23:56:10 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:40.621 23:56:10 -- common/autotest_common.sh@1184 -- # local i=0 00:11:40.621 23:56:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.621 23:56:10 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:11:40.621 23:56:10 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:11:40.621 23:56:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:42.540 23:56:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:42.540 23:56:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:42.540 23:56:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.540 23:56:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:42.540 23:56:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.540 23:56:12 -- common/autotest_common.sh@1194 -- # return 0 00:11:42.540 23:56:12 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:42.540 23:56:12 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:42.540 23:56:12 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:42.540 23:56:12 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:42.540 23:56:12 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:42.540 23:56:12 -- common/autotest_common.sh@638 -- # local es=0 00:11:42.540 23:56:12 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.540 23:56:12 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:42.540 23:56:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.540 23:56:12 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:42.540 23:56:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.540 23:56:12 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:42.540 23:56:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:42.540 23:56:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:42.800 23:56:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.800 23:56:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:42.800 23:56:12 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:42.800 23:56:12 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.800 23:56:12 -- common/autotest_common.sh@641 -- # es=1 00:11:42.800 23:56:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:42.800 23:56:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:42.800 23:56:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:42.800 23:56:12 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:42.800 23:56:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:42.800 23:56:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:42.800 [ 0]:0x2 00:11:42.800 23:56:12 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:11:42.800 23:56:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:42.800 23:56:12 -- target/ns_masking.sh@40 -- # nguid=300b0405a74548819325e36fe0d35e71 00:11:42.800 23:56:12 -- target/ns_masking.sh@41 -- # [[ 300b0405a74548819325e36fe0d35e71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.800 23:56:12 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.061 23:56:13 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:43.061 23:56:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.061 23:56:13 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.061 [ 0]:0x1 00:11:43.061 23:56:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.061 23:56:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.061 23:56:13 -- target/ns_masking.sh@40 -- # nguid=d9e0b3d37e264ac4a3d88158086ef902 00:11:43.061 23:56:13 -- target/ns_masking.sh@41 -- # [[ d9e0b3d37e264ac4a3d88158086ef902 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.061 23:56:13 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:43.061 23:56:13 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.061 23:56:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.061 [ 1]:0x2 00:11:43.061 23:56:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.061 23:56:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.061 23:56:13 -- target/ns_masking.sh@40 -- # nguid=300b0405a74548819325e36fe0d35e71 00:11:43.061 23:56:13 -- target/ns_masking.sh@41 -- # [[ 300b0405a74548819325e36fe0d35e71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.061 23:56:13 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.322 23:56:13 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:43.322 23:56:13 -- common/autotest_common.sh@638 -- # local es=0 00:11:43.322 23:56:13 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:43.322 23:56:13 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:43.322 23:56:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.322 23:56:13 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:43.322 23:56:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.323 23:56:13 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:43.323 23:56:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.323 23:56:13 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.323 23:56:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.323 23:56:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.323 23:56:13 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:43.323 23:56:13 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.323 23:56:13 -- common/autotest_common.sh@641 -- # es=1 00:11:43.323 23:56:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:43.323 23:56:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:43.323 23:56:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:43.323 23:56:13 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:43.323 23:56:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.323 23:56:13 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.323 [ 0]:0x2 00:11:43.323 23:56:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.323 23:56:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.323 23:56:13 -- target/ns_masking.sh@40 -- # nguid=300b0405a74548819325e36fe0d35e71 00:11:43.323 23:56:13 -- target/ns_masking.sh@41 -- # [[ 300b0405a74548819325e36fe0d35e71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.323 23:56:13 -- target/ns_masking.sh@91 -- # disconnect 00:11:43.323 23:56:13 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.323 23:56:13 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.585 23:56:13 -- target/ns_masking.sh@95 -- # connect 2 00:11:43.585 23:56:13 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b5abce41-2097-4987-8e5f-a181e2583414 -a 10.0.0.2 -s 4420 -i 4 00:11:43.846 23:56:13 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:43.846 23:56:13 -- common/autotest_common.sh@1184 -- # local i=0 00:11:43.846 23:56:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.846 23:56:13 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:43.846 23:56:13 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:43.846 23:56:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:45.788 23:56:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:45.788 23:56:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:45.788 23:56:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.788 23:56:15 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:11:45.788 23:56:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.788 23:56:15 -- common/autotest_common.sh@1194 -- # return 0 00:11:45.788 23:56:15 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:45.788 23:56:15 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:45.788 23:56:15 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:45.788 23:56:15 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:45.788 23:56:15 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:45.788 23:56:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:45.788 23:56:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:45.788 [ 0]:0x1 00:11:45.788 23:56:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:45.788 23:56:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.048 23:56:16 -- target/ns_masking.sh@40 -- # nguid=d9e0b3d37e264ac4a3d88158086ef902 00:11:46.048 23:56:16 -- target/ns_masking.sh@41 -- # [[ d9e0b3d37e264ac4a3d88158086ef902 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.048 23:56:16 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:46.048 23:56:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.048 23:56:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.048 [ 1]:0x2 
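The remaining checks exercise SPDK's per-host namespace masking: a namespace added with --no-auto-visible is hidden from every host until it is explicitly attached to that host's NQN. Condensed from the RPCs in the trace, with rpc.py shown under the full workspace path exactly as the test invokes it:

  # NSID 1 (Malloc1) is re-added without automatic visibility, so by default no host sees it
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # grant, then revoke, visibility of NSID 1 for the host NQN used by this initiator
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # NSID 2 (Malloc2) was added earlier without --no-auto-visible and so stays visible to all hosts

After the remove_host call the host is expected to fail the ns_is_visible 0x1 check (all-zero NGUID) while still seeing NSID 2, and the later attempt to run nvmf_ns_remove_host against NSID 2 is expected to be rejected (the 'Invalid parameters' JSON-RPC error further below), since that namespace was not added in the masked mode.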
00:11:46.048 23:56:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.048 23:56:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.048 23:56:16 -- target/ns_masking.sh@40 -- # nguid=300b0405a74548819325e36fe0d35e71 00:11:46.048 23:56:16 -- target/ns_masking.sh@41 -- # [[ 300b0405a74548819325e36fe0d35e71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.048 23:56:16 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:46.048 23:56:16 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:46.048 23:56:16 -- common/autotest_common.sh@638 -- # local es=0 00:11:46.048 23:56:16 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.048 23:56:16 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:46.048 23:56:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.048 23:56:16 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:46.048 23:56:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.048 23:56:16 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:46.048 23:56:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.048 23:56:16 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.048 23:56:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.048 23:56:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.309 23:56:16 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:46.309 23:56:16 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.309 23:56:16 -- common/autotest_common.sh@641 -- # es=1 00:11:46.309 23:56:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.309 23:56:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.309 23:56:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.309 23:56:16 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:46.309 23:56:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.309 23:56:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.309 [ 0]:0x2 00:11:46.309 23:56:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.309 23:56:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.309 23:56:16 -- target/ns_masking.sh@40 -- # nguid=300b0405a74548819325e36fe0d35e71 00:11:46.309 23:56:16 -- target/ns_masking.sh@41 -- # [[ 300b0405a74548819325e36fe0d35e71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.309 23:56:16 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.309 23:56:16 -- common/autotest_common.sh@638 -- # local es=0 00:11:46.309 23:56:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.309 23:56:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.309 23:56:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.310 23:56:16 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.310 23:56:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.310 23:56:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.310 23:56:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.310 23:56:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.310 23:56:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:46.310 23:56:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.310 [2024-04-26 23:56:16.492356] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:46.310 request: 00:11:46.310 { 00:11:46.310 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.310 "nsid": 2, 00:11:46.310 "host": "nqn.2016-06.io.spdk:host1", 00:11:46.310 "method": "nvmf_ns_remove_host", 00:11:46.310 "req_id": 1 00:11:46.310 } 00:11:46.310 Got JSON-RPC error response 00:11:46.310 response: 00:11:46.310 { 00:11:46.310 "code": -32602, 00:11:46.310 "message": "Invalid parameters" 00:11:46.310 } 00:11:46.310 23:56:16 -- common/autotest_common.sh@641 -- # es=1 00:11:46.310 23:56:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.310 23:56:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.310 23:56:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.310 23:56:16 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:46.310 23:56:16 -- common/autotest_common.sh@638 -- # local es=0 00:11:46.310 23:56:16 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.310 23:56:16 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:46.310 23:56:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.310 23:56:16 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:46.310 23:56:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.310 23:56:16 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:46.570 23:56:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.570 23:56:16 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.570 23:56:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.570 23:56:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.570 23:56:16 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:46.570 23:56:16 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.570 23:56:16 -- common/autotest_common.sh@641 -- # es=1 00:11:46.570 23:56:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.570 23:56:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.570 23:56:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.570 23:56:16 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:46.570 23:56:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.570 23:56:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.570 [ 0]:0x2 00:11:46.570 23:56:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.570 23:56:16 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.570 23:56:16 -- target/ns_masking.sh@40 -- # nguid=300b0405a74548819325e36fe0d35e71 00:11:46.570 23:56:16 -- target/ns_masking.sh@41 -- # [[ 300b0405a74548819325e36fe0d35e71 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.570 23:56:16 -- target/ns_masking.sh@108 -- # disconnect 00:11:46.570 23:56:16 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.570 23:56:16 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.830 23:56:16 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:46.830 23:56:16 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:46.830 23:56:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:46.830 23:56:16 -- nvmf/common.sh@117 -- # sync 00:11:46.830 23:56:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:46.830 23:56:16 -- nvmf/common.sh@120 -- # set +e 00:11:46.830 23:56:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.830 23:56:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:46.830 rmmod nvme_tcp 00:11:46.830 rmmod nvme_fabrics 00:11:46.830 rmmod nvme_keyring 00:11:46.830 23:56:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.830 23:56:16 -- nvmf/common.sh@124 -- # set -e 00:11:46.830 23:56:16 -- nvmf/common.sh@125 -- # return 0 00:11:46.830 23:56:16 -- nvmf/common.sh@478 -- # '[' -n 299246 ']' 00:11:46.830 23:56:16 -- nvmf/common.sh@479 -- # killprocess 299246 00:11:46.830 23:56:16 -- common/autotest_common.sh@936 -- # '[' -z 299246 ']' 00:11:46.830 23:56:16 -- common/autotest_common.sh@940 -- # kill -0 299246 00:11:46.830 23:56:16 -- common/autotest_common.sh@941 -- # uname 00:11:46.830 23:56:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:46.830 23:56:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 299246 00:11:46.830 23:56:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:46.830 23:56:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:46.830 23:56:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 299246' 00:11:46.830 killing process with pid 299246 00:11:46.830 23:56:16 -- common/autotest_common.sh@955 -- # kill 299246 00:11:46.830 23:56:16 -- common/autotest_common.sh@960 -- # wait 299246 00:11:47.090 23:56:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:47.090 23:56:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:47.090 23:56:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:47.090 23:56:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.090 23:56:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.090 23:56:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.090 23:56:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.090 23:56:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.998 23:56:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.998 00:11:48.998 real 0m20.633s 00:11:48.998 user 0m49.584s 00:11:48.998 sys 0m6.666s 00:11:48.998 23:56:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.998 23:56:19 -- common/autotest_common.sh@10 -- # set +x 00:11:48.998 ************************************ 00:11:48.998 END TEST nvmf_ns_masking 00:11:48.998 
************************************ 00:11:49.258 23:56:19 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:49.258 23:56:19 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.258 23:56:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:49.258 23:56:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:49.258 23:56:19 -- common/autotest_common.sh@10 -- # set +x 00:11:49.258 ************************************ 00:11:49.258 START TEST nvmf_nvme_cli 00:11:49.258 ************************************ 00:11:49.258 23:56:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.519 * Looking for test storage... 00:11:49.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.519 23:56:19 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.519 23:56:19 -- nvmf/common.sh@7 -- # uname -s 00:11:49.519 23:56:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.519 23:56:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.519 23:56:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.519 23:56:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.519 23:56:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.519 23:56:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.519 23:56:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.519 23:56:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.519 23:56:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.519 23:56:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.519 23:56:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:49.519 23:56:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:49.519 23:56:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.519 23:56:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.519 23:56:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.519 23:56:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.519 23:56:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.519 23:56:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.519 23:56:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.519 23:56:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.519 23:56:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.519 23:56:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.519 23:56:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.519 23:56:19 -- paths/export.sh@5 -- # export PATH 00:11:49.519 23:56:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.519 23:56:19 -- nvmf/common.sh@47 -- # : 0 00:11:49.519 23:56:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.519 23:56:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.519 23:56:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.519 23:56:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.519 23:56:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.519 23:56:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.519 23:56:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.519 23:56:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.519 23:56:19 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.519 23:56:19 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.519 23:56:19 -- target/nvme_cli.sh@14 -- # devs=() 00:11:49.519 23:56:19 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:49.519 23:56:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:49.519 23:56:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.519 23:56:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:49.519 23:56:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:49.519 23:56:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:49.519 23:56:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.519 23:56:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.519 23:56:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.519 23:56:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:49.519 23:56:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:49.519 23:56:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.519 23:56:19 -- common/autotest_common.sh@10 -- # set +x 00:11:56.104 23:56:26 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:56.104 23:56:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:56.104 23:56:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:56.104 23:56:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:56.104 23:56:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:56.104 23:56:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:56.104 23:56:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:56.104 23:56:26 -- nvmf/common.sh@295 -- # net_devs=() 00:11:56.104 23:56:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:56.104 23:56:26 -- nvmf/common.sh@296 -- # e810=() 00:11:56.104 23:56:26 -- nvmf/common.sh@296 -- # local -ga e810 00:11:56.104 23:56:26 -- nvmf/common.sh@297 -- # x722=() 00:11:56.104 23:56:26 -- nvmf/common.sh@297 -- # local -ga x722 00:11:56.104 23:56:26 -- nvmf/common.sh@298 -- # mlx=() 00:11:56.104 23:56:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:56.104 23:56:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.104 23:56:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:56.104 23:56:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:56.104 23:56:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:56.104 23:56:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.104 23:56:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:56.104 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:56.104 23:56:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.104 23:56:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:56.104 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:56.104 23:56:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
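The device-discovery chatter above reduces to: enumerate the Intel E810 functions (vendor 0x8086, device 0x159b) and record the kernel netdevs behind each one. A rough standalone equivalent is sketched below, assuming lspci is available; the real gather_supported_nvmf_pci_devs helper also covers x722 and Mellanox IDs and the RDMA-only cases.

  net_devs=()
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci (0x8086 - 0x159b)"
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $path ]] || continue
          echo "Found net devices under $pci: ${path##*/}"   # e.g. cvl_0_0, cvl_0_1
          net_devs+=("${path##*/}")
      done
  done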
00:11:56.104 23:56:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:56.104 23:56:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.104 23:56:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.104 23:56:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:56.104 23:56:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.104 23:56:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:56.104 Found net devices under 0000:31:00.0: cvl_0_0 00:11:56.104 23:56:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.104 23:56:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.104 23:56:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.104 23:56:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:56.104 23:56:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.104 23:56:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:56.104 Found net devices under 0000:31:00.1: cvl_0_1 00:11:56.104 23:56:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.104 23:56:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:56.104 23:56:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:56.104 23:56:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:56.104 23:56:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:56.104 23:56:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.104 23:56:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.104 23:56:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.104 23:56:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:56.104 23:56:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.104 23:56:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.104 23:56:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:56.104 23:56:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.104 23:56:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.104 23:56:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:56.104 23:56:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:56.366 23:56:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.366 23:56:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.366 23:56:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.366 23:56:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.366 23:56:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:56.366 23:56:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.628 23:56:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.628 23:56:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.628 23:56:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:56.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:56.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:11:56.628 00:11:56.628 --- 10.0.0.2 ping statistics --- 00:11:56.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.628 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:11:56.628 23:56:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:11:56.628 00:11:56.628 --- 10.0.0.1 ping statistics --- 00:11:56.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.628 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:11:56.628 23:56:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.628 23:56:26 -- nvmf/common.sh@411 -- # return 0 00:11:56.628 23:56:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:56.628 23:56:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.628 23:56:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:56.628 23:56:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:56.628 23:56:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.628 23:56:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:56.628 23:56:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:56.628 23:56:26 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:56.628 23:56:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:56.628 23:56:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:56.628 23:56:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.628 23:56:26 -- nvmf/common.sh@470 -- # nvmfpid=305987 00:11:56.628 23:56:26 -- nvmf/common.sh@471 -- # waitforlisten 305987 00:11:56.628 23:56:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.628 23:56:26 -- common/autotest_common.sh@817 -- # '[' -z 305987 ']' 00:11:56.628 23:56:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.628 23:56:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:56.628 23:56:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.628 23:56:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:56.628 23:56:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.628 [2024-04-26 23:56:26.741992] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:11:56.628 [2024-04-26 23:56:26.742042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.628 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.628 [2024-04-26 23:56:26.811047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.890 [2024-04-26 23:56:26.875091] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.890 [2024-04-26 23:56:26.875131] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:56.890 [2024-04-26 23:56:26.875142] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.890 [2024-04-26 23:56:26.875149] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.890 [2024-04-26 23:56:26.875155] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.890 [2024-04-26 23:56:26.875261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.890 [2024-04-26 23:56:26.875394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.890 [2024-04-26 23:56:26.875550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.890 [2024-04-26 23:56:26.875551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.890 23:56:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:56.890 23:56:26 -- common/autotest_common.sh@850 -- # return 0 00:11:56.890 23:56:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:56.890 23:56:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:56.890 23:56:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.890 23:56:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.890 23:56:27 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.890 23:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.890 23:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:56.890 [2024-04-26 23:56:27.024707] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.890 23:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.890 23:56:27 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:56.890 23:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.890 23:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:56.890 Malloc0 00:11:56.890 23:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.890 23:56:27 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:56.890 23:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.890 23:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:56.890 Malloc1 00:11:56.890 23:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.890 23:56:27 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:56.890 23:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.890 23:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:56.890 23:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.890 23:56:27 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:56.890 23:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.890 23:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:56.890 23:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.890 23:56:27 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.890 23:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.890 23:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:56.890 23:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.890 23:56:27 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:11:56.890 23:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.890 23:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:57.152 [2024-04-26 23:56:27.114629] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.152 23:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.152 23:56:27 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.152 23:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:57.152 23:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:57.152 23:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.152 23:56:27 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:11:57.152 00:11:57.152 Discovery Log Number of Records 2, Generation counter 2 00:11:57.152 =====Discovery Log Entry 0====== 00:11:57.152 trtype: tcp 00:11:57.152 adrfam: ipv4 00:11:57.152 subtype: current discovery subsystem 00:11:57.152 treq: not required 00:11:57.152 portid: 0 00:11:57.152 trsvcid: 4420 00:11:57.152 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:57.152 traddr: 10.0.0.2 00:11:57.152 eflags: explicit discovery connections, duplicate discovery information 00:11:57.152 sectype: none 00:11:57.152 =====Discovery Log Entry 1====== 00:11:57.152 trtype: tcp 00:11:57.152 adrfam: ipv4 00:11:57.152 subtype: nvme subsystem 00:11:57.152 treq: not required 00:11:57.152 portid: 0 00:11:57.152 trsvcid: 4420 00:11:57.152 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:57.152 traddr: 10.0.0.2 00:11:57.152 eflags: none 00:11:57.152 sectype: none 00:11:57.152 23:56:27 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:57.152 23:56:27 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:57.152 23:56:27 -- nvmf/common.sh@511 -- # local dev _ 00:11:57.152 23:56:27 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:57.152 23:56:27 -- nvmf/common.sh@510 -- # nvme list 00:11:57.152 23:56:27 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:57.152 23:56:27 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:57.152 23:56:27 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:57.152 23:56:27 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:57.152 23:56:27 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:57.152 23:56:27 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.075 23:56:28 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:59.075 23:56:28 -- common/autotest_common.sh@1184 -- # local i=0 00:11:59.075 23:56:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.075 23:56:28 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:59.075 23:56:28 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:59.075 23:56:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:01.082 23:56:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:01.082 23:56:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:01.082 23:56:30 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.082 23:56:30 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
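Stripped of the xtrace plumbing, the nvme_cli scenario above is: stand up a TCP subsystem with two malloc-backed namespaces, then discover, connect, and wait until both block devices surface. A condensed sketch follows; the rpc.py path is shortened, and the --hostnqn/--hostid options seen in the log (generated with nvme gen-hostnqn) are omitted for brevity.

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  nvme discover -t tcp -a 10.0.0.2 -s 4420               # lists the discovery and cnode1 entries
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: both namespaces appear as block devices carrying this serial
  until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )); do sleep 1; done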
00:12:01.082 23:56:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.082 23:56:30 -- common/autotest_common.sh@1194 -- # return 0 00:12:01.082 23:56:30 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:01.082 23:56:30 -- nvmf/common.sh@511 -- # local dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@510 -- # nvme list 00:12:01.082 23:56:30 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:01.082 23:56:30 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:01.082 23:56:30 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:01.082 /dev/nvme0n1 ]] 00:12:01.082 23:56:30 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:01.082 23:56:30 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:01.082 23:56:30 -- nvmf/common.sh@511 -- # local dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@510 -- # nvme list 00:12:01.082 23:56:30 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:01.082 23:56:30 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:01.082 23:56:30 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:01.082 23:56:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:01.082 23:56:30 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:01.082 23:56:30 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.082 23:56:30 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.082 23:56:30 -- common/autotest_common.sh@1205 -- # local i=0 00:12:01.082 23:56:30 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:01.082 23:56:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.082 23:56:30 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:01.082 23:56:30 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.082 23:56:30 -- common/autotest_common.sh@1217 -- # return 0 00:12:01.082 23:56:30 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:01.082 23:56:30 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.082 23:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.082 23:56:30 -- common/autotest_common.sh@10 -- # set +x 00:12:01.082 23:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.082 23:56:31 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:01.082 23:56:31 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:01.082 23:56:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:01.082 23:56:31 -- nvmf/common.sh@117 -- # sync 00:12:01.082 23:56:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.082 23:56:31 -- nvmf/common.sh@120 -- # set +e 00:12:01.082 23:56:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.082 23:56:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.082 rmmod nvme_tcp 00:12:01.083 rmmod nvme_fabrics 00:12:01.083 rmmod nvme_keyring 00:12:01.083 23:56:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.083 23:56:31 -- nvmf/common.sh@124 -- # set -e 00:12:01.083 23:56:31 -- nvmf/common.sh@125 -- # return 0 00:12:01.083 23:56:31 -- nvmf/common.sh@478 -- # '[' -n 305987 ']' 00:12:01.083 23:56:31 -- nvmf/common.sh@479 -- # killprocess 305987 00:12:01.083 23:56:31 -- common/autotest_common.sh@936 -- # '[' -z 305987 ']' 00:12:01.083 23:56:31 -- common/autotest_common.sh@940 -- # kill -0 305987 00:12:01.083 23:56:31 -- common/autotest_common.sh@941 -- # uname 00:12:01.083 23:56:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:01.083 23:56:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 305987 00:12:01.083 23:56:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:01.083 23:56:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:01.083 23:56:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 305987' 00:12:01.083 killing process with pid 305987 00:12:01.083 23:56:31 -- common/autotest_common.sh@955 -- # kill 305987 00:12:01.083 23:56:31 -- common/autotest_common.sh@960 -- # wait 305987 00:12:01.083 23:56:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:01.083 23:56:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:01.083 23:56:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:01.083 23:56:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.083 23:56:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.083 23:56:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.083 23:56:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.083 23:56:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.631 23:56:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:03.631 00:12:03.631 real 0m13.948s 00:12:03.631 user 0m19.621s 00:12:03.631 sys 0m5.790s 00:12:03.631 23:56:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:03.631 23:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:03.631 ************************************ 00:12:03.631 END TEST nvmf_nvme_cli 00:12:03.631 ************************************ 00:12:03.631 23:56:33 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:03.631 23:56:33 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:03.631 23:56:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:03.631 23:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:03.631 23:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:03.631 ************************************ 00:12:03.631 START TEST nvmf_vfio_user 00:12:03.631 ************************************ 00:12:03.631 23:56:33 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:03.631 * Looking for test storage... 00:12:03.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.631 23:56:33 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.631 23:56:33 -- nvmf/common.sh@7 -- # uname -s 00:12:03.631 23:56:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.631 23:56:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.631 23:56:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.631 23:56:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.631 23:56:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.631 23:56:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.631 23:56:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.631 23:56:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.631 23:56:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.631 23:56:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.631 23:56:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:03.631 23:56:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:03.631 23:56:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.631 23:56:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.631 23:56:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.631 23:56:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.631 23:56:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.631 23:56:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.631 23:56:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.631 23:56:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.631 23:56:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.632 23:56:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.632 23:56:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.632 23:56:33 -- paths/export.sh@5 -- # export PATH 00:12:03.632 23:56:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.632 23:56:33 -- nvmf/common.sh@47 -- # : 0 00:12:03.632 23:56:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.632 23:56:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.632 23:56:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.632 23:56:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.632 23:56:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.632 23:56:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.632 23:56:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.632 23:56:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=307474 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 307474' 00:12:03.632 Process pid: 307474 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 307474 00:12:03.632 23:56:33 -- common/autotest_common.sh@817 -- # '[' -z 307474 ']' 00:12:03.632 23:56:33 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:03.632 23:56:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.632 23:56:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:03.632 23:56:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.632 23:56:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:03.632 23:56:33 -- common/autotest_common.sh@10 -- # set +x 00:12:03.632 [2024-04-26 23:56:33.752283] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:12:03.632 [2024-04-26 23:56:33.752350] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.632 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.632 [2024-04-26 23:56:33.820957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.893 [2024-04-26 23:56:33.893524] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.893 [2024-04-26 23:56:33.893567] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.893 [2024-04-26 23:56:33.893575] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.893 [2024-04-26 23:56:33.893581] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.893 [2024-04-26 23:56:33.893587] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.893 [2024-04-26 23:56:33.893706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.893 [2024-04-26 23:56:33.893856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.893 [2024-04-26 23:56:33.893961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.893 [2024-04-26 23:56:33.893961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.466 23:56:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:04.466 23:56:34 -- common/autotest_common.sh@850 -- # return 0 00:12:04.466 23:56:34 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:05.411 23:56:35 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:05.673 23:56:35 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:05.673 23:56:35 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:05.673 23:56:35 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:05.673 23:56:35 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:05.673 23:56:35 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:05.673 Malloc1 00:12:05.936 23:56:35 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:05.936 23:56:36 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:06.198 23:56:36 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:06.198 23:56:36 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:06.198 23:56:36 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:06.198 23:56:36 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:06.460 Malloc2 00:12:06.460 23:56:36 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:06.722 23:56:36 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:06.722 23:56:36 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:06.986 23:56:37 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:06.986 23:56:37 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:06.986 23:56:37 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:06.986 23:56:37 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:06.986 23:56:37 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:06.986 23:56:37 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:06.986 [2024-04-26 23:56:37.083956] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:12:06.986 [2024-04-26 23:56:37.084011] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308168 ] 00:12:06.986 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.986 [2024-04-26 23:56:37.113457] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:06.986 [2024-04-26 23:56:37.122124] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:06.986 [2024-04-26 23:56:37.122144] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0070f46000 00:12:06.986 [2024-04-26 23:56:37.123121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:06.986 [2024-04-26 23:56:37.124129] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:06.986 [2024-04-26 23:56:37.125141] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:06.986 [2024-04-26 23:56:37.126142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:06.986 [2024-04-26 23:56:37.127166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:06.986 [2024-04-26 23:56:37.128162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
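For orientation while reading the identify debug trace above and below: the vfio-user endpoint it talks to was created with the RPC sequence sketched here. All names and paths are the ones logged; the rpc.py and spdk_nvme_identify paths are shortened, and this is a summary of the setup rather than the full nvmf_vfio_user.sh script.

  rpc=scripts/rpc.py
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc nvmf_create_transport -t VFIOUSER
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

  # The controller is then exercised over the vfio-user transport:
  build/bin/spdk_nvme_identify -g -L nvme -L nvme_vfio -L vfio_pci \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'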
00:12:06.986 [2024-04-26 23:56:37.129163] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:06.986 [2024-04-26 23:56:37.130171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:06.986 [2024-04-26 23:56:37.131183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:06.986 [2024-04-26 23:56:37.131195] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0070f3b000 00:12:06.986 [2024-04-26 23:56:37.132521] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:06.986 [2024-04-26 23:56:37.149447] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:06.986 [2024-04-26 23:56:37.149477] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:06.986 [2024-04-26 23:56:37.154337] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:06.986 [2024-04-26 23:56:37.154387] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:06.986 [2024-04-26 23:56:37.154477] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:06.986 [2024-04-26 23:56:37.154497] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:06.986 [2024-04-26 23:56:37.154502] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:06.986 [2024-04-26 23:56:37.155328] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:06.986 [2024-04-26 23:56:37.155337] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:06.986 [2024-04-26 23:56:37.155344] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:06.986 [2024-04-26 23:56:37.156336] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:06.986 [2024-04-26 23:56:37.156344] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:06.986 [2024-04-26 23:56:37.156351] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:06.986 [2024-04-26 23:56:37.157337] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:06.986 [2024-04-26 23:56:37.157345] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:06.986 [2024-04-26 23:56:37.158342] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:06.986 [2024-04-26 23:56:37.158350] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:06.986 [2024-04-26 23:56:37.158355] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:06.986 [2024-04-26 23:56:37.158362] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:06.986 [2024-04-26 23:56:37.158467] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:06.986 [2024-04-26 23:56:37.158472] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:06.986 [2024-04-26 23:56:37.158480] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:06.986 [2024-04-26 23:56:37.159355] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:06.986 [2024-04-26 23:56:37.160360] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:06.986 [2024-04-26 23:56:37.161362] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:06.986 [2024-04-26 23:56:37.162358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:06.986 [2024-04-26 23:56:37.162425] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:06.986 [2024-04-26 23:56:37.163368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:06.986 [2024-04-26 23:56:37.163376] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:06.986 [2024-04-26 23:56:37.163381] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163402] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:06.987 [2024-04-26 23:56:37.163410] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163426] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:06.987 [2024-04-26 23:56:37.163431] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:06.987 [2024-04-26 23:56:37.163446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:06.987 [2024-04-26 
23:56:37.163495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.163505] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:06.987 [2024-04-26 23:56:37.163511] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:06.987 [2024-04-26 23:56:37.163516] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:06.987 [2024-04-26 23:56:37.163521] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:06.987 [2024-04-26 23:56:37.163528] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:06.987 [2024-04-26 23:56:37.163532] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:06.987 [2024-04-26 23:56:37.163537] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163545] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.163564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.163578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.987 [2024-04-26 23:56:37.163588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.987 [2024-04-26 23:56:37.163597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.987 [2024-04-26 23:56:37.163605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.987 [2024-04-26 23:56:37.163609] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163618] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.163640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.163645] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:06.987 [2024-04-26 23:56:37.163650] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163660] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163666] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.163688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.163735] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163743] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163751] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:06.987 [2024-04-26 23:56:37.163755] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:06.987 [2024-04-26 23:56:37.163761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.163776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.163785] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:06.987 [2024-04-26 23:56:37.163793] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163801] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163808] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:06.987 [2024-04-26 23:56:37.163812] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:06.987 [2024-04-26 23:56:37.163818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.163839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.163851] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163859] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163866] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:12:06.987 [2024-04-26 23:56:37.163870] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:06.987 [2024-04-26 23:56:37.163876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.163889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.163897] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163904] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163912] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163918] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163923] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163928] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:06.987 [2024-04-26 23:56:37.163932] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:06.987 [2024-04-26 23:56:37.163937] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:06.987 [2024-04-26 23:56:37.163954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.163966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.163977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.163993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.164004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.164015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.164026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:06.987 [2024-04-26 23:56:37.164037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:06.987 [2024-04-26 23:56:37.164047] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:06.987 [2024-04-26 23:56:37.164051] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:06.987 [2024-04-26 23:56:37.164055] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:06.987 [2024-04-26 23:56:37.164060] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:06.987 [2024-04-26 23:56:37.164067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:06.987 [2024-04-26 23:56:37.164074] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:06.987 [2024-04-26 23:56:37.164078] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:06.987 [2024-04-26 23:56:37.164084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:06.988 [2024-04-26 23:56:37.164091] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:06.988 [2024-04-26 23:56:37.164096] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:06.988 [2024-04-26 23:56:37.164102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:06.988 [2024-04-26 23:56:37.164109] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:06.988 [2024-04-26 23:56:37.164113] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:06.988 [2024-04-26 23:56:37.164119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:06.988 [2024-04-26 23:56:37.164126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:06.988 [2024-04-26 23:56:37.164138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:06.988 [2024-04-26 23:56:37.164147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:06.988 [2024-04-26 23:56:37.164154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:06.988 ===================================================== 00:12:06.988 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:06.988 ===================================================== 00:12:06.988 Controller Capabilities/Features 00:12:06.988 ================================ 00:12:06.988 Vendor ID: 4e58 00:12:06.988 Subsystem Vendor ID: 4e58 00:12:06.988 Serial Number: SPDK1 00:12:06.988 Model Number: SPDK bdev Controller 00:12:06.988 Firmware Version: 24.05 00:12:06.988 Recommended Arb Burst: 6 00:12:06.988 IEEE OUI Identifier: 8d 6b 50 00:12:06.988 Multi-path I/O 00:12:06.988 May have multiple subsystem ports: Yes 00:12:06.988 May have multiple controllers: Yes 00:12:06.988 Associated with SR-IOV VF: No 00:12:06.988 Max Data Transfer Size: 131072 00:12:06.988 Max Number of Namespaces: 32 00:12:06.988 Max Number of I/O Queues: 127 00:12:06.988 NVMe 
Specification Version (VS): 1.3 00:12:06.988 NVMe Specification Version (Identify): 1.3 00:12:06.988 Maximum Queue Entries: 256 00:12:06.988 Contiguous Queues Required: Yes 00:12:06.988 Arbitration Mechanisms Supported 00:12:06.988 Weighted Round Robin: Not Supported 00:12:06.988 Vendor Specific: Not Supported 00:12:06.988 Reset Timeout: 15000 ms 00:12:06.988 Doorbell Stride: 4 bytes 00:12:06.988 NVM Subsystem Reset: Not Supported 00:12:06.988 Command Sets Supported 00:12:06.988 NVM Command Set: Supported 00:12:06.988 Boot Partition: Not Supported 00:12:06.988 Memory Page Size Minimum: 4096 bytes 00:12:06.988 Memory Page Size Maximum: 4096 bytes 00:12:06.988 Persistent Memory Region: Not Supported 00:12:06.988 Optional Asynchronous Events Supported 00:12:06.988 Namespace Attribute Notices: Supported 00:12:06.988 Firmware Activation Notices: Not Supported 00:12:06.988 ANA Change Notices: Not Supported 00:12:06.988 PLE Aggregate Log Change Notices: Not Supported 00:12:06.988 LBA Status Info Alert Notices: Not Supported 00:12:06.988 EGE Aggregate Log Change Notices: Not Supported 00:12:06.988 Normal NVM Subsystem Shutdown event: Not Supported 00:12:06.988 Zone Descriptor Change Notices: Not Supported 00:12:06.988 Discovery Log Change Notices: Not Supported 00:12:06.988 Controller Attributes 00:12:06.988 128-bit Host Identifier: Supported 00:12:06.988 Non-Operational Permissive Mode: Not Supported 00:12:06.988 NVM Sets: Not Supported 00:12:06.988 Read Recovery Levels: Not Supported 00:12:06.988 Endurance Groups: Not Supported 00:12:06.988 Predictable Latency Mode: Not Supported 00:12:06.988 Traffic Based Keep ALive: Not Supported 00:12:06.988 Namespace Granularity: Not Supported 00:12:06.988 SQ Associations: Not Supported 00:12:06.988 UUID List: Not Supported 00:12:06.988 Multi-Domain Subsystem: Not Supported 00:12:06.988 Fixed Capacity Management: Not Supported 00:12:06.988 Variable Capacity Management: Not Supported 00:12:06.988 Delete Endurance Group: Not Supported 00:12:06.988 Delete NVM Set: Not Supported 00:12:06.988 Extended LBA Formats Supported: Not Supported 00:12:06.988 Flexible Data Placement Supported: Not Supported 00:12:06.988 00:12:06.988 Controller Memory Buffer Support 00:12:06.988 ================================ 00:12:06.988 Supported: No 00:12:06.988 00:12:06.988 Persistent Memory Region Support 00:12:06.988 ================================ 00:12:06.988 Supported: No 00:12:06.988 00:12:06.988 Admin Command Set Attributes 00:12:06.988 ============================ 00:12:06.988 Security Send/Receive: Not Supported 00:12:06.988 Format NVM: Not Supported 00:12:06.988 Firmware Activate/Download: Not Supported 00:12:06.988 Namespace Management: Not Supported 00:12:06.988 Device Self-Test: Not Supported 00:12:06.988 Directives: Not Supported 00:12:06.988 NVMe-MI: Not Supported 00:12:06.988 Virtualization Management: Not Supported 00:12:06.988 Doorbell Buffer Config: Not Supported 00:12:06.988 Get LBA Status Capability: Not Supported 00:12:06.988 Command & Feature Lockdown Capability: Not Supported 00:12:06.988 Abort Command Limit: 4 00:12:06.988 Async Event Request Limit: 4 00:12:06.988 Number of Firmware Slots: N/A 00:12:06.988 Firmware Slot 1 Read-Only: N/A 00:12:06.988 Firmware Activation Without Reset: N/A 00:12:06.988 Multiple Update Detection Support: N/A 00:12:06.988 Firmware Update Granularity: No Information Provided 00:12:06.988 Per-Namespace SMART Log: No 00:12:06.988 Asymmetric Namespace Access Log Page: Not Supported 00:12:06.988 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:12:06.988 Command Effects Log Page: Supported 00:12:06.988 Get Log Page Extended Data: Supported 00:12:06.988 Telemetry Log Pages: Not Supported 00:12:06.988 Persistent Event Log Pages: Not Supported 00:12:06.988 Supported Log Pages Log Page: May Support 00:12:06.988 Commands Supported & Effects Log Page: Not Supported 00:12:06.988 Feature Identifiers & Effects Log Page:May Support 00:12:06.988 NVMe-MI Commands & Effects Log Page: May Support 00:12:06.988 Data Area 4 for Telemetry Log: Not Supported 00:12:06.988 Error Log Page Entries Supported: 128 00:12:06.988 Keep Alive: Supported 00:12:06.988 Keep Alive Granularity: 10000 ms 00:12:06.988 00:12:06.988 NVM Command Set Attributes 00:12:06.988 ========================== 00:12:06.988 Submission Queue Entry Size 00:12:06.988 Max: 64 00:12:06.988 Min: 64 00:12:06.988 Completion Queue Entry Size 00:12:06.988 Max: 16 00:12:06.988 Min: 16 00:12:06.988 Number of Namespaces: 32 00:12:06.988 Compare Command: Supported 00:12:06.988 Write Uncorrectable Command: Not Supported 00:12:06.988 Dataset Management Command: Supported 00:12:06.988 Write Zeroes Command: Supported 00:12:06.988 Set Features Save Field: Not Supported 00:12:06.988 Reservations: Not Supported 00:12:06.988 Timestamp: Not Supported 00:12:06.988 Copy: Supported 00:12:06.988 Volatile Write Cache: Present 00:12:06.988 Atomic Write Unit (Normal): 1 00:12:06.988 Atomic Write Unit (PFail): 1 00:12:06.988 Atomic Compare & Write Unit: 1 00:12:06.988 Fused Compare & Write: Supported 00:12:06.988 Scatter-Gather List 00:12:06.988 SGL Command Set: Supported (Dword aligned) 00:12:06.988 SGL Keyed: Not Supported 00:12:06.988 SGL Bit Bucket Descriptor: Not Supported 00:12:06.988 SGL Metadata Pointer: Not Supported 00:12:06.988 Oversized SGL: Not Supported 00:12:06.988 SGL Metadata Address: Not Supported 00:12:06.988 SGL Offset: Not Supported 00:12:06.988 Transport SGL Data Block: Not Supported 00:12:06.988 Replay Protected Memory Block: Not Supported 00:12:06.988 00:12:06.988 Firmware Slot Information 00:12:06.988 ========================= 00:12:06.988 Active slot: 1 00:12:06.988 Slot 1 Firmware Revision: 24.05 00:12:06.988 00:12:06.988 00:12:06.988 Commands Supported and Effects 00:12:06.988 ============================== 00:12:06.988 Admin Commands 00:12:06.988 -------------- 00:12:06.988 Get Log Page (02h): Supported 00:12:06.988 Identify (06h): Supported 00:12:06.988 Abort (08h): Supported 00:12:06.988 Set Features (09h): Supported 00:12:06.988 Get Features (0Ah): Supported 00:12:06.988 Asynchronous Event Request (0Ch): Supported 00:12:06.988 Keep Alive (18h): Supported 00:12:06.988 I/O Commands 00:12:06.988 ------------ 00:12:06.989 Flush (00h): Supported LBA-Change 00:12:06.989 Write (01h): Supported LBA-Change 00:12:06.989 Read (02h): Supported 00:12:06.989 Compare (05h): Supported 00:12:06.989 Write Zeroes (08h): Supported LBA-Change 00:12:06.989 Dataset Management (09h): Supported LBA-Change 00:12:06.989 Copy (19h): Supported LBA-Change 00:12:06.989 Unknown (79h): Supported LBA-Change 00:12:06.989 Unknown (7Ah): Supported 00:12:06.989 00:12:06.989 Error Log 00:12:06.989 ========= 00:12:06.989 00:12:06.989 Arbitration 00:12:06.989 =========== 00:12:06.989 Arbitration Burst: 1 00:12:06.989 00:12:06.989 Power Management 00:12:06.989 ================ 00:12:06.989 Number of Power States: 1 00:12:06.989 Current Power State: Power State #0 00:12:06.989 Power State #0: 00:12:06.989 Max Power: 0.00 W 00:12:06.989 Non-Operational State: Operational 00:12:06.989 Entry 
Latency: Not Reported 00:12:06.989 Exit Latency: Not Reported 00:12:06.989 Relative Read Throughput: 0 00:12:06.989 Relative Read Latency: 0 00:12:06.989 Relative Write Throughput: 0 00:12:06.989 Relative Write Latency: 0 00:12:06.989 Idle Power: Not Reported 00:12:06.989 Active Power: Not Reported 00:12:06.989 Non-Operational Permissive Mode: Not Supported 00:12:06.989 00:12:06.989 Health Information 00:12:06.989 ================== 00:12:06.989 Critical Warnings: 00:12:06.989 Available Spare Space: OK 00:12:06.989 Temperature: OK 00:12:06.989 Device Reliability: OK 00:12:06.989 Read Only: No 00:12:06.989 Volatile Memory Backup: OK 00:12:06.989 Current Temperature: 0 Kelvin (-2[2024-04-26 23:56:37.164355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:06.989 [2024-04-26 23:56:37.164368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:06.989 [2024-04-26 23:56:37.164393] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:06.989 [2024-04-26 23:56:37.164403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.989 [2024-04-26 23:56:37.164410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.989 [2024-04-26 23:56:37.164416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.989 [2024-04-26 23:56:37.164422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.989 [2024-04-26 23:56:37.167844] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:06.989 [2024-04-26 23:56:37.167856] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:06.989 [2024-04-26 23:56:37.168385] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:06.989 [2024-04-26 23:56:37.168434] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:06.989 [2024-04-26 23:56:37.168441] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:06.989 [2024-04-26 23:56:37.169402] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:06.989 [2024-04-26 23:56:37.169415] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:06.989 [2024-04-26 23:56:37.169482] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:06.989 [2024-04-26 23:56:37.171439] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.251 73 Celsius) 00:12:07.251 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:07.251 Available Spare: 0% 00:12:07.251 Available Spare Threshold: 0% 00:12:07.251 Life Percentage Used: 0% 
00:12:07.251 Data Units Read: 0 00:12:07.251 Data Units Written: 0 00:12:07.251 Host Read Commands: 0 00:12:07.251 Host Write Commands: 0 00:12:07.251 Controller Busy Time: 0 minutes 00:12:07.251 Power Cycles: 0 00:12:07.251 Power On Hours: 0 hours 00:12:07.251 Unsafe Shutdowns: 0 00:12:07.251 Unrecoverable Media Errors: 0 00:12:07.251 Lifetime Error Log Entries: 0 00:12:07.251 Warning Temperature Time: 0 minutes 00:12:07.251 Critical Temperature Time: 0 minutes 00:12:07.251 00:12:07.251 Number of Queues 00:12:07.251 ================ 00:12:07.251 Number of I/O Submission Queues: 127 00:12:07.251 Number of I/O Completion Queues: 127 00:12:07.251 00:12:07.251 Active Namespaces 00:12:07.251 ================= 00:12:07.251 Namespace ID:1 00:12:07.251 Error Recovery Timeout: Unlimited 00:12:07.251 Command Set Identifier: NVM (00h) 00:12:07.251 Deallocate: Supported 00:12:07.251 Deallocated/Unwritten Error: Not Supported 00:12:07.251 Deallocated Read Value: Unknown 00:12:07.251 Deallocate in Write Zeroes: Not Supported 00:12:07.251 Deallocated Guard Field: 0xFFFF 00:12:07.251 Flush: Supported 00:12:07.251 Reservation: Supported 00:12:07.251 Namespace Sharing Capabilities: Multiple Controllers 00:12:07.251 Size (in LBAs): 131072 (0GiB) 00:12:07.251 Capacity (in LBAs): 131072 (0GiB) 00:12:07.251 Utilization (in LBAs): 131072 (0GiB) 00:12:07.251 NGUID: FC07B587190E4C82975E1BF58F0AC033 00:12:07.251 UUID: fc07b587-190e-4c82-975e-1bf58f0ac033 00:12:07.251 Thin Provisioning: Not Supported 00:12:07.251 Per-NS Atomic Units: Yes 00:12:07.251 Atomic Boundary Size (Normal): 0 00:12:07.251 Atomic Boundary Size (PFail): 0 00:12:07.251 Atomic Boundary Offset: 0 00:12:07.251 Maximum Single Source Range Length: 65535 00:12:07.251 Maximum Copy Length: 65535 00:12:07.251 Maximum Source Range Count: 1 00:12:07.251 NGUID/EUI64 Never Reused: No 00:12:07.251 Namespace Write Protected: No 00:12:07.251 Number of LBA Formats: 1 00:12:07.251 Current LBA Format: LBA Format #00 00:12:07.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:07.251 00:12:07.251 23:56:37 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:07.251 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.251 [2024-04-26 23:56:37.371560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:12.542 [2024-04-26 23:56:42.389691] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:12.542 Initializing NVMe Controllers 00:12:12.542 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:12.542 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:12.542 Initialization complete. Launching workers. 
00:12:12.542 ======================================================== 00:12:12.542 Latency(us) 00:12:12.542 Device Information : IOPS MiB/s Average min max 00:12:12.542 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 44454.94 173.65 2878.79 898.40 7490.14 00:12:12.542 ======================================================== 00:12:12.542 Total : 44454.94 173.65 2878.79 898.40 7490.14 00:12:12.542 00:12:12.542 23:56:42 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:12.542 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.542 [2024-04-26 23:56:42.590688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:17.827 [2024-04-26 23:56:47.624586] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:17.827 Initializing NVMe Controllers 00:12:17.827 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:17.827 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:17.827 Initialization complete. Launching workers. 00:12:17.827 ======================================================== 00:12:17.827 Latency(us) 00:12:17.827 Device Information : IOPS MiB/s Average min max 00:12:17.827 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15615.02 61.00 8196.54 5984.90 16952.35 00:12:17.827 ======================================================== 00:12:17.827 Total : 15615.02 61.00 8196.54 5984.90 16952.35 00:12:17.827 00:12:17.827 23:56:47 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:17.827 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.827 [2024-04-26 23:56:47.847637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:23.112 [2024-04-26 23:56:52.920070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:23.112 Initializing NVMe Controllers 00:12:23.112 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.112 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.112 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:23.112 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:23.112 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:23.112 Initialization complete. Launching workers. 
00:12:23.112 Starting thread on core 2 00:12:23.112 Starting thread on core 3 00:12:23.112 Starting thread on core 1 00:12:23.112 23:56:52 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:23.112 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.112 [2024-04-26 23:56:53.191243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.408 [2024-04-26 23:56:56.242515] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.408 Initializing NVMe Controllers 00:12:26.408 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.408 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.408 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:26.408 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:26.408 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:26.408 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:26.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:26.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:26.408 Initialization complete. Launching workers. 00:12:26.408 Starting thread on core 1 with urgent priority queue 00:12:26.408 Starting thread on core 2 with urgent priority queue 00:12:26.408 Starting thread on core 3 with urgent priority queue 00:12:26.408 Starting thread on core 0 with urgent priority queue 00:12:26.408 SPDK bdev Controller (SPDK1 ) core 0: 12506.33 IO/s 8.00 secs/100000 ios 00:12:26.408 SPDK bdev Controller (SPDK1 ) core 1: 7642.00 IO/s 13.09 secs/100000 ios 00:12:26.408 SPDK bdev Controller (SPDK1 ) core 2: 9970.67 IO/s 10.03 secs/100000 ios 00:12:26.408 SPDK bdev Controller (SPDK1 ) core 3: 8245.67 IO/s 12.13 secs/100000 ios 00:12:26.408 ======================================================== 00:12:26.408 00:12:26.408 23:56:56 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:26.408 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.408 [2024-04-26 23:56:56.504340] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.408 [2024-04-26 23:56:56.538534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.408 Initializing NVMe Controllers 00:12:26.408 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.408 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.408 Namespace ID: 1 size: 0GB 00:12:26.408 Initialization complete. 00:12:26.408 INFO: using host memory buffer for IO 00:12:26.408 Hello world! 
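For reference, the vfio-user path exercised by these tests reduces to a short RPC sequence followed by pointing any SPDK NVMe application at the resulting socket. A minimal sketch in shell, assuming a running nvmf_tgt with the default RPC socket and reusing the same directory, bdev name and NQN that appear in the log above (the flags mirror the cnode2 setup shown earlier; not a verbatim excerpt of the test script):
  # create the per-controller socket directory and a 64 MiB malloc backing bdev (512-byte blocks)
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  # create the subsystem, attach the namespace and add a VFIOUSER listener on that directory
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # any SPDK NVMe tool can then reach the controller through the VFIOUSER transport, e.g.
  build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g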
00:12:26.408 23:56:56 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:26.669 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.669 [2024-04-26 23:56:56.798330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.608 Initializing NVMe Controllers 00:12:27.608 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.608 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.608 Initialization complete. Launching workers. 00:12:27.608 submit (in ns) avg, min, max = 8833.9, 3867.5, 4000835.0 00:12:27.608 complete (in ns) avg, min, max = 17096.7, 2353.3, 3998493.3 00:12:27.608 00:12:27.608 Submit histogram 00:12:27.608 ================ 00:12:27.608 Range in us Cumulative Count 00:12:27.608 3.867 - 3.893: 1.0875% ( 217) 00:12:27.608 3.893 - 3.920: 6.3446% ( 1049) 00:12:27.608 3.920 - 3.947: 15.5257% ( 1832) 00:12:27.608 3.947 - 3.973: 26.5912% ( 2208) 00:12:27.608 3.973 - 4.000: 39.4808% ( 2572) 00:12:27.608 4.000 - 4.027: 54.2097% ( 2939) 00:12:27.608 4.027 - 4.053: 71.0284% ( 3356) 00:12:27.608 4.053 - 4.080: 83.8278% ( 2554) 00:12:27.608 4.080 - 4.107: 91.8813% ( 1607) 00:12:27.608 4.107 - 4.133: 96.4569% ( 913) 00:12:27.608 4.133 - 4.160: 98.5767% ( 423) 00:12:27.608 4.160 - 4.187: 99.1781% ( 120) 00:12:27.608 4.187 - 4.213: 99.4086% ( 46) 00:12:27.608 4.213 - 4.240: 99.4537% ( 9) 00:12:27.608 4.240 - 4.267: 99.4738% ( 4) 00:12:27.608 4.267 - 4.293: 99.4838% ( 2) 00:12:27.608 4.880 - 4.907: 99.4888% ( 1) 00:12:27.608 5.067 - 5.093: 99.4938% ( 1) 00:12:27.608 5.093 - 5.120: 99.4988% ( 1) 00:12:27.608 5.280 - 5.307: 99.5089% ( 2) 00:12:27.608 5.387 - 5.413: 99.5139% ( 1) 00:12:27.608 5.413 - 5.440: 99.5189% ( 1) 00:12:27.608 5.520 - 5.547: 99.5239% ( 1) 00:12:27.608 5.573 - 5.600: 99.5289% ( 1) 00:12:27.608 5.760 - 5.787: 99.5440% ( 3) 00:12:27.608 5.787 - 5.813: 99.5490% ( 1) 00:12:27.608 5.920 - 5.947: 99.5540% ( 1) 00:12:27.608 5.973 - 6.000: 99.5590% ( 1) 00:12:27.608 6.000 - 6.027: 99.5640% ( 1) 00:12:27.608 6.027 - 6.053: 99.5690% ( 1) 00:12:27.608 6.053 - 6.080: 99.5740% ( 1) 00:12:27.608 6.080 - 6.107: 99.5790% ( 1) 00:12:27.608 6.107 - 6.133: 99.5840% ( 1) 00:12:27.608 6.187 - 6.213: 99.5941% ( 2) 00:12:27.608 6.213 - 6.240: 99.6091% ( 3) 00:12:27.608 6.293 - 6.320: 99.6191% ( 2) 00:12:27.608 6.320 - 6.347: 99.6241% ( 1) 00:12:27.608 6.373 - 6.400: 99.6291% ( 1) 00:12:27.608 6.427 - 6.453: 99.6342% ( 1) 00:12:27.608 6.453 - 6.480: 99.6392% ( 1) 00:12:27.608 6.507 - 6.533: 99.6442% ( 1) 00:12:27.608 6.533 - 6.560: 99.6542% ( 2) 00:12:27.608 6.613 - 6.640: 99.6592% ( 1) 00:12:27.608 6.640 - 6.667: 99.6642% ( 1) 00:12:27.608 6.693 - 6.720: 99.6692% ( 1) 00:12:27.608 6.800 - 6.827: 99.6743% ( 1) 00:12:27.608 6.933 - 6.987: 99.6793% ( 1) 00:12:27.608 6.987 - 7.040: 99.6893% ( 2) 00:12:27.608 7.040 - 7.093: 99.6993% ( 2) 00:12:27.608 7.093 - 7.147: 99.7093% ( 2) 00:12:27.608 7.200 - 7.253: 99.7294% ( 4) 00:12:27.608 7.307 - 7.360: 99.7394% ( 2) 00:12:27.608 7.360 - 7.413: 99.7594% ( 4) 00:12:27.608 7.413 - 7.467: 99.7745% ( 3) 00:12:27.608 7.467 - 7.520: 99.7795% ( 1) 00:12:27.608 7.573 - 7.627: 99.7845% ( 1) 00:12:27.608 7.680 - 7.733: 99.7895% ( 1) 00:12:27.608 7.733 - 7.787: 99.7945% ( 1) 00:12:27.608 7.840 - 7.893: 99.8046% ( 2) 00:12:27.608 7.893 - 7.947: 99.8146% ( 2) 00:12:27.608 8.000 - 8.053: 99.8296% ( 3) 
00:12:27.608 8.107 - 8.160: 99.8346% ( 1) 00:12:27.608 8.213 - 8.267: 99.8396% ( 1) 00:12:27.608 8.267 - 8.320: 99.8446% ( 1) 00:12:27.608 8.320 - 8.373: 99.8497% ( 1) 00:12:27.608 8.533 - 8.587: 99.8547% ( 1) 00:12:27.608 8.693 - 8.747: 99.8597% ( 1) 00:12:27.608 9.333 - 9.387: 99.8647% ( 1) 00:12:27.608 13.653 - 13.760: 99.8697% ( 1) 00:12:27.608 14.613 - 14.720: 99.8747% ( 1) 00:12:27.608 15.147 - 15.253: 99.8797% ( 1) 00:12:27.608 3986.773 - 4014.080: 100.0000% ( 24) 00:12:27.608 00:12:27.608 Complete histogram 00:12:27.608 ================== 00:12:27.608 Range in us Cumulative Count 00:12:27.608 2.347 - 2.360: 0.0050% ( 1) 00:12:27.608 2.360 - 2.373: 0.0551% ( 10) 00:12:27.608 2.373 - 2.387: 1.1075% ( 210) 00:12:27.608 2.387 - 2.400: 1.2078% ( 20) 00:12:27.608 2.400 - 2.413: 1.3030% ( 19) 00:12:27.608 2.413 - [2024-04-26 23:56:57.818310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.870 2.427: 1.3180% ( 3) 00:12:27.870 2.427 - 2.440: 31.0113% ( 5925) 00:12:27.870 2.440 - 2.453: 60.4591% ( 5876) 00:12:27.870 2.453 - 2.467: 69.5299% ( 1810) 00:12:27.870 2.467 - 2.480: 77.8491% ( 1660) 00:12:27.870 2.480 - 2.493: 81.4022% ( 709) 00:12:27.870 2.493 - 2.507: 83.4920% ( 417) 00:12:27.870 2.507 - 2.520: 88.5837% ( 1016) 00:12:27.870 2.520 - 2.533: 93.8759% ( 1056) 00:12:27.870 2.533 - 2.547: 96.8077% ( 585) 00:12:27.870 2.547 - 2.560: 98.4013% ( 318) 00:12:27.870 2.560 - 2.573: 99.1881% ( 157) 00:12:27.870 2.573 - 2.587: 99.3786% ( 38) 00:12:27.870 2.587 - 2.600: 99.3986% ( 4) 00:12:27.870 2.600 - 2.613: 99.4036% ( 1) 00:12:27.870 2.627 - 2.640: 99.4086% ( 1) 00:12:27.870 4.560 - 4.587: 99.4137% ( 1) 00:12:27.870 4.640 - 4.667: 99.4237% ( 2) 00:12:27.870 4.667 - 4.693: 99.4287% ( 1) 00:12:27.870 4.800 - 4.827: 99.4337% ( 1) 00:12:27.870 4.853 - 4.880: 99.4387% ( 1) 00:12:27.870 5.013 - 5.040: 99.4487% ( 2) 00:12:27.870 5.120 - 5.147: 99.4537% ( 1) 00:12:27.870 5.253 - 5.280: 99.4688% ( 3) 00:12:27.870 5.360 - 5.387: 99.4788% ( 2) 00:12:27.870 5.387 - 5.413: 99.4838% ( 1) 00:12:27.870 5.413 - 5.440: 99.4888% ( 1) 00:12:27.870 5.467 - 5.493: 99.4938% ( 1) 00:12:27.870 5.493 - 5.520: 99.5039% ( 2) 00:12:27.870 5.520 - 5.547: 99.5089% ( 1) 00:12:27.870 5.547 - 5.573: 99.5189% ( 2) 00:12:27.870 5.600 - 5.627: 99.5289% ( 2) 00:12:27.870 5.627 - 5.653: 99.5389% ( 2) 00:12:27.870 5.680 - 5.707: 99.5440% ( 1) 00:12:27.870 5.733 - 5.760: 99.5490% ( 1) 00:12:27.870 5.760 - 5.787: 99.5590% ( 2) 00:12:27.870 5.867 - 5.893: 99.5690% ( 2) 00:12:27.870 5.920 - 5.947: 99.5740% ( 1) 00:12:27.870 6.107 - 6.133: 99.5790% ( 1) 00:12:27.870 6.160 - 6.187: 99.5891% ( 2) 00:12:27.870 6.507 - 6.533: 99.5991% ( 2) 00:12:27.870 6.613 - 6.640: 99.6041% ( 1) 00:12:27.870 7.787 - 7.840: 99.6141% ( 2) 00:12:27.870 11.787 - 11.840: 99.6191% ( 1) 00:12:27.870 13.440 - 13.493: 99.6241% ( 1) 00:12:27.870 13.653 - 13.760: 99.6291% ( 1) 00:12:27.870 256.000 - 257.707: 99.6342% ( 1) 00:12:27.870 3986.773 - 4014.080: 100.0000% ( 73) 00:12:27.870 00:12:27.870 23:56:57 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:27.870 23:56:57 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:27.870 23:56:57 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:27.870 23:56:57 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:27.870 23:56:57 -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:27.870 [2024-04-26 23:56:58.006018] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:27.870 [ 00:12:27.870 { 00:12:27.870 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:27.870 "subtype": "Discovery", 00:12:27.870 "listen_addresses": [], 00:12:27.870 "allow_any_host": true, 00:12:27.870 "hosts": [] 00:12:27.870 }, 00:12:27.870 { 00:12:27.870 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:27.870 "subtype": "NVMe", 00:12:27.870 "listen_addresses": [ 00:12:27.870 { 00:12:27.870 "transport": "VFIOUSER", 00:12:27.870 "trtype": "VFIOUSER", 00:12:27.870 "adrfam": "IPv4", 00:12:27.870 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:27.870 "trsvcid": "0" 00:12:27.870 } 00:12:27.870 ], 00:12:27.870 "allow_any_host": true, 00:12:27.870 "hosts": [], 00:12:27.870 "serial_number": "SPDK1", 00:12:27.870 "model_number": "SPDK bdev Controller", 00:12:27.870 "max_namespaces": 32, 00:12:27.870 "min_cntlid": 1, 00:12:27.870 "max_cntlid": 65519, 00:12:27.870 "namespaces": [ 00:12:27.870 { 00:12:27.870 "nsid": 1, 00:12:27.870 "bdev_name": "Malloc1", 00:12:27.870 "name": "Malloc1", 00:12:27.870 "nguid": "FC07B587190E4C82975E1BF58F0AC033", 00:12:27.870 "uuid": "fc07b587-190e-4c82-975e-1bf58f0ac033" 00:12:27.870 } 00:12:27.870 ] 00:12:27.870 }, 00:12:27.870 { 00:12:27.870 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:27.870 "subtype": "NVMe", 00:12:27.870 "listen_addresses": [ 00:12:27.870 { 00:12:27.870 "transport": "VFIOUSER", 00:12:27.870 "trtype": "VFIOUSER", 00:12:27.870 "adrfam": "IPv4", 00:12:27.870 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:27.870 "trsvcid": "0" 00:12:27.870 } 00:12:27.870 ], 00:12:27.870 "allow_any_host": true, 00:12:27.870 "hosts": [], 00:12:27.870 "serial_number": "SPDK2", 00:12:27.870 "model_number": "SPDK bdev Controller", 00:12:27.870 "max_namespaces": 32, 00:12:27.870 "min_cntlid": 1, 00:12:27.870 "max_cntlid": 65519, 00:12:27.870 "namespaces": [ 00:12:27.870 { 00:12:27.870 "nsid": 1, 00:12:27.870 "bdev_name": "Malloc2", 00:12:27.870 "name": "Malloc2", 00:12:27.870 "nguid": "D41C41CBC3D74AA18E31E6BB441EC237", 00:12:27.870 "uuid": "d41c41cb-c3d7-4aa1-8e31-e6bb441ec237" 00:12:27.870 } 00:12:27.870 ] 00:12:27.870 } 00:12:27.870 ] 00:12:27.870 23:56:58 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:27.870 23:56:58 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:27.870 23:56:58 -- target/nvmf_vfio_user.sh@34 -- # aerpid=312198 00:12:27.870 23:56:58 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:27.870 23:56:58 -- common/autotest_common.sh@1251 -- # local i=0 00:12:27.870 23:56:58 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:27.870 23:56:58 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:27.870 23:56:58 -- common/autotest_common.sh@1262 -- # return 0 00:12:27.870 23:56:58 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:27.870 23:56:58 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:27.870 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.131 Malloc3 00:12:28.131 [2024-04-26 23:56:58.195227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.131 23:56:58 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:28.392 [2024-04-26 23:56:58.364279] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.392 23:56:58 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:28.392 Asynchronous Event Request test 00:12:28.392 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.392 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.392 Registering asynchronous event callbacks... 00:12:28.392 Starting namespace attribute notice tests for all controllers... 00:12:28.392 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:28.392 aer_cb - Changed Namespace 00:12:28.392 Cleaning up... 00:12:28.392 [ 00:12:28.392 { 00:12:28.392 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.392 "subtype": "Discovery", 00:12:28.392 "listen_addresses": [], 00:12:28.392 "allow_any_host": true, 00:12:28.392 "hosts": [] 00:12:28.392 }, 00:12:28.392 { 00:12:28.392 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:28.392 "subtype": "NVMe", 00:12:28.392 "listen_addresses": [ 00:12:28.392 { 00:12:28.392 "transport": "VFIOUSER", 00:12:28.392 "trtype": "VFIOUSER", 00:12:28.392 "adrfam": "IPv4", 00:12:28.392 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:28.392 "trsvcid": "0" 00:12:28.392 } 00:12:28.392 ], 00:12:28.392 "allow_any_host": true, 00:12:28.392 "hosts": [], 00:12:28.392 "serial_number": "SPDK1", 00:12:28.392 "model_number": "SPDK bdev Controller", 00:12:28.392 "max_namespaces": 32, 00:12:28.392 "min_cntlid": 1, 00:12:28.392 "max_cntlid": 65519, 00:12:28.392 "namespaces": [ 00:12:28.392 { 00:12:28.392 "nsid": 1, 00:12:28.392 "bdev_name": "Malloc1", 00:12:28.392 "name": "Malloc1", 00:12:28.392 "nguid": "FC07B587190E4C82975E1BF58F0AC033", 00:12:28.392 "uuid": "fc07b587-190e-4c82-975e-1bf58f0ac033" 00:12:28.392 }, 00:12:28.392 { 00:12:28.392 "nsid": 2, 00:12:28.392 "bdev_name": "Malloc3", 00:12:28.392 "name": "Malloc3", 00:12:28.392 "nguid": "B176076BF71A4247B08D88FE4554E069", 00:12:28.392 "uuid": "b176076b-f71a-4247-b08d-88fe4554e069" 00:12:28.392 } 00:12:28.392 ] 00:12:28.392 }, 00:12:28.392 { 00:12:28.392 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:28.392 "subtype": "NVMe", 00:12:28.392 "listen_addresses": [ 00:12:28.392 { 00:12:28.392 "transport": "VFIOUSER", 00:12:28.392 "trtype": "VFIOUSER", 00:12:28.392 "adrfam": "IPv4", 00:12:28.392 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:28.392 "trsvcid": "0" 00:12:28.392 } 00:12:28.392 ], 00:12:28.392 "allow_any_host": true, 00:12:28.392 "hosts": [], 00:12:28.392 "serial_number": "SPDK2", 00:12:28.392 "model_number": "SPDK bdev Controller", 00:12:28.392 "max_namespaces": 32, 00:12:28.392 "min_cntlid": 1, 
00:12:28.392 "max_cntlid": 65519, 00:12:28.392 "namespaces": [ 00:12:28.392 { 00:12:28.392 "nsid": 1, 00:12:28.392 "bdev_name": "Malloc2", 00:12:28.392 "name": "Malloc2", 00:12:28.392 "nguid": "D41C41CBC3D74AA18E31E6BB441EC237", 00:12:28.392 "uuid": "d41c41cb-c3d7-4aa1-8e31-e6bb441ec237" 00:12:28.392 } 00:12:28.392 ] 00:12:28.392 } 00:12:28.392 ] 00:12:28.392 23:56:58 -- target/nvmf_vfio_user.sh@44 -- # wait 312198 00:12:28.392 23:56:58 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:28.392 23:56:58 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:28.392 23:56:58 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:28.392 23:56:58 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:28.392 [2024-04-26 23:56:58.579821] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:12:28.392 [2024-04-26 23:56:58.579876] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312217 ] 00:12:28.392 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.655 [2024-04-26 23:56:58.612362] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:28.655 [2024-04-26 23:56:58.621065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:28.655 [2024-04-26 23:56:58.621085] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f02e4b80000 00:12:28.655 [2024-04-26 23:56:58.622072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.655 [2024-04-26 23:56:58.623079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.655 [2024-04-26 23:56:58.624081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.655 [2024-04-26 23:56:58.625092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:28.655 [2024-04-26 23:56:58.626101] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:28.655 [2024-04-26 23:56:58.627102] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.655 [2024-04-26 23:56:58.628105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:28.655 [2024-04-26 23:56:58.629130] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.655 [2024-04-26 23:56:58.630132] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:28.655 [2024-04-26 23:56:58.630145] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f02e4b75000 00:12:28.655 [2024-04-26 23:56:58.631470] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:28.655 [2024-04-26 23:56:58.647672] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:28.655 [2024-04-26 23:56:58.647695] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:28.655 [2024-04-26 23:56:58.652786] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:28.655 [2024-04-26 23:56:58.652830] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:28.655 [2024-04-26 23:56:58.652915] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:28.655 [2024-04-26 23:56:58.652932] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:28.655 [2024-04-26 23:56:58.652937] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:28.655 [2024-04-26 23:56:58.653796] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:28.655 [2024-04-26 23:56:58.653805] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:28.655 [2024-04-26 23:56:58.653812] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:28.655 [2024-04-26 23:56:58.654798] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:28.655 [2024-04-26 23:56:58.654807] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:28.655 [2024-04-26 23:56:58.654815] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:28.655 [2024-04-26 23:56:58.655803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:28.655 [2024-04-26 23:56:58.655813] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:28.655 [2024-04-26 23:56:58.656813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:28.655 [2024-04-26 23:56:58.656821] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:28.655 [2024-04-26 23:56:58.656826] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:28.655 [2024-04-26 23:56:58.656841] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:28.655 [2024-04-26 23:56:58.656947] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:28.655 [2024-04-26 23:56:58.656952] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:28.655 [2024-04-26 23:56:58.656956] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:28.655 [2024-04-26 23:56:58.657820] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:28.655 [2024-04-26 23:56:58.658820] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:28.655 [2024-04-26 23:56:58.659827] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:28.655 [2024-04-26 23:56:58.660835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:28.655 [2024-04-26 23:56:58.660876] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:28.655 [2024-04-26 23:56:58.661844] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:28.655 [2024-04-26 23:56:58.661853] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:28.655 [2024-04-26 23:56:58.661858] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:28.655 [2024-04-26 23:56:58.661878] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:28.655 [2024-04-26 23:56:58.661886] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:28.655 [2024-04-26 23:56:58.661900] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:28.655 [2024-04-26 23:56:58.661905] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.655 [2024-04-26 23:56:58.661917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.655 [2024-04-26 23:56:58.670845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:28.655 [2024-04-26 23:56:58.670858] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:28.655 [2024-04-26 23:56:58.670863] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:28.655 [2024-04-26 23:56:58.670867] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:28.655 [2024-04-26 23:56:58.670872] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:28.655 [2024-04-26 23:56:58.670876] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:28.655 [2024-04-26 23:56:58.670881] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:28.655 [2024-04-26 23:56:58.670885] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:28.655 [2024-04-26 23:56:58.670893] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:28.655 [2024-04-26 23:56:58.670906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:28.655 [2024-04-26 23:56:58.678845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:28.655 [2024-04-26 23:56:58.678859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.655 [2024-04-26 23:56:58.678868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.656 [2024-04-26 23:56:58.678876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.656 [2024-04-26 23:56:58.678884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.656 [2024-04-26 23:56:58.678889] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.678897] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.678906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.686845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.686853] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:28.656 [2024-04-26 23:56:58.686858] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.686867] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.686872] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.686881] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.694844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.694895] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.694903] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.694910] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:28.656 [2024-04-26 23:56:58.694915] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:28.656 [2024-04-26 23:56:58.694921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.702844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.702854] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:28.656 [2024-04-26 23:56:58.702867] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.702875] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.702884] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:28.656 [2024-04-26 23:56:58.702888] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.656 [2024-04-26 23:56:58.702895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.710844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.710858] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.710866] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.710873] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:28.656 [2024-04-26 23:56:58.710878] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.656 [2024-04-26 23:56:58.710884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.718845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.718855] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.718861] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.718869] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.718874] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.718879] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.718884] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:28.656 [2024-04-26 23:56:58.718888] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:28.656 [2024-04-26 23:56:58.718893] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:28.656 [2024-04-26 23:56:58.718909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.726843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.726856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.734843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.734856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.742844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.742857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.750844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.750857] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:28.656 [2024-04-26 23:56:58.750862] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:28.656 [2024-04-26 23:56:58.750865] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:28.656 [2024-04-26 23:56:58.750868] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:28.656 [2024-04-26 23:56:58.750875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:28.656 
[2024-04-26 23:56:58.750882] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:28.656 [2024-04-26 23:56:58.750887] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:28.656 [2024-04-26 23:56:58.750892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.750900] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:28.656 [2024-04-26 23:56:58.750904] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.656 [2024-04-26 23:56:58.750910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.750917] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:28.656 [2024-04-26 23:56:58.750921] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:28.656 [2024-04-26 23:56:58.750927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:28.656 [2024-04-26 23:56:58.758845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.758860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.758869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:28.656 [2024-04-26 23:56:58.758876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:28.656 ===================================================== 00:12:28.656 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:28.656 ===================================================== 00:12:28.656 Controller Capabilities/Features 00:12:28.656 ================================ 00:12:28.656 Vendor ID: 4e58 00:12:28.656 Subsystem Vendor ID: 4e58 00:12:28.656 Serial Number: SPDK2 00:12:28.656 Model Number: SPDK bdev Controller 00:12:28.656 Firmware Version: 24.05 00:12:28.656 Recommended Arb Burst: 6 00:12:28.656 IEEE OUI Identifier: 8d 6b 50 00:12:28.656 Multi-path I/O 00:12:28.656 May have multiple subsystem ports: Yes 00:12:28.656 May have multiple controllers: Yes 00:12:28.656 Associated with SR-IOV VF: No 00:12:28.656 Max Data Transfer Size: 131072 00:12:28.656 Max Number of Namespaces: 32 00:12:28.656 Max Number of I/O Queues: 127 00:12:28.656 NVMe Specification Version (VS): 1.3 00:12:28.656 NVMe Specification Version (Identify): 1.3 00:12:28.656 Maximum Queue Entries: 256 00:12:28.656 Contiguous Queues Required: Yes 00:12:28.656 Arbitration Mechanisms Supported 00:12:28.656 Weighted Round Robin: Not Supported 00:12:28.657 Vendor Specific: Not Supported 00:12:28.657 Reset Timeout: 15000 ms 00:12:28.657 Doorbell Stride: 4 bytes 00:12:28.657 NVM Subsystem Reset: Not Supported 00:12:28.657 Command Sets Supported 00:12:28.657 NVM Command Set: Supported 00:12:28.657 Boot Partition: Not Supported 00:12:28.657 
Memory Page Size Minimum: 4096 bytes 00:12:28.657 Memory Page Size Maximum: 4096 bytes 00:12:28.657 Persistent Memory Region: Not Supported 00:12:28.657 Optional Asynchronous Events Supported 00:12:28.657 Namespace Attribute Notices: Supported 00:12:28.657 Firmware Activation Notices: Not Supported 00:12:28.657 ANA Change Notices: Not Supported 00:12:28.657 PLE Aggregate Log Change Notices: Not Supported 00:12:28.657 LBA Status Info Alert Notices: Not Supported 00:12:28.657 EGE Aggregate Log Change Notices: Not Supported 00:12:28.657 Normal NVM Subsystem Shutdown event: Not Supported 00:12:28.657 Zone Descriptor Change Notices: Not Supported 00:12:28.657 Discovery Log Change Notices: Not Supported 00:12:28.657 Controller Attributes 00:12:28.657 128-bit Host Identifier: Supported 00:12:28.657 Non-Operational Permissive Mode: Not Supported 00:12:28.657 NVM Sets: Not Supported 00:12:28.657 Read Recovery Levels: Not Supported 00:12:28.657 Endurance Groups: Not Supported 00:12:28.657 Predictable Latency Mode: Not Supported 00:12:28.657 Traffic Based Keep ALive: Not Supported 00:12:28.657 Namespace Granularity: Not Supported 00:12:28.657 SQ Associations: Not Supported 00:12:28.657 UUID List: Not Supported 00:12:28.657 Multi-Domain Subsystem: Not Supported 00:12:28.657 Fixed Capacity Management: Not Supported 00:12:28.657 Variable Capacity Management: Not Supported 00:12:28.657 Delete Endurance Group: Not Supported 00:12:28.657 Delete NVM Set: Not Supported 00:12:28.657 Extended LBA Formats Supported: Not Supported 00:12:28.657 Flexible Data Placement Supported: Not Supported 00:12:28.657 00:12:28.657 Controller Memory Buffer Support 00:12:28.657 ================================ 00:12:28.657 Supported: No 00:12:28.657 00:12:28.657 Persistent Memory Region Support 00:12:28.657 ================================ 00:12:28.657 Supported: No 00:12:28.657 00:12:28.657 Admin Command Set Attributes 00:12:28.657 ============================ 00:12:28.657 Security Send/Receive: Not Supported 00:12:28.657 Format NVM: Not Supported 00:12:28.657 Firmware Activate/Download: Not Supported 00:12:28.657 Namespace Management: Not Supported 00:12:28.657 Device Self-Test: Not Supported 00:12:28.657 Directives: Not Supported 00:12:28.657 NVMe-MI: Not Supported 00:12:28.657 Virtualization Management: Not Supported 00:12:28.657 Doorbell Buffer Config: Not Supported 00:12:28.657 Get LBA Status Capability: Not Supported 00:12:28.657 Command & Feature Lockdown Capability: Not Supported 00:12:28.657 Abort Command Limit: 4 00:12:28.657 Async Event Request Limit: 4 00:12:28.657 Number of Firmware Slots: N/A 00:12:28.657 Firmware Slot 1 Read-Only: N/A 00:12:28.657 Firmware Activation Without Reset: N/A 00:12:28.657 Multiple Update Detection Support: N/A 00:12:28.657 Firmware Update Granularity: No Information Provided 00:12:28.657 Per-Namespace SMART Log: No 00:12:28.657 Asymmetric Namespace Access Log Page: Not Supported 00:12:28.657 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:28.657 Command Effects Log Page: Supported 00:12:28.657 Get Log Page Extended Data: Supported 00:12:28.657 Telemetry Log Pages: Not Supported 00:12:28.657 Persistent Event Log Pages: Not Supported 00:12:28.657 Supported Log Pages Log Page: May Support 00:12:28.657 Commands Supported & Effects Log Page: Not Supported 00:12:28.657 Feature Identifiers & Effects Log Page:May Support 00:12:28.657 NVMe-MI Commands & Effects Log Page: May Support 00:12:28.657 Data Area 4 for Telemetry Log: Not Supported 00:12:28.657 Error Log Page Entries Supported: 128 
00:12:28.657 Keep Alive: Supported 00:12:28.657 Keep Alive Granularity: 10000 ms 00:12:28.657 00:12:28.657 NVM Command Set Attributes 00:12:28.657 ========================== 00:12:28.657 Submission Queue Entry Size 00:12:28.657 Max: 64 00:12:28.657 Min: 64 00:12:28.657 Completion Queue Entry Size 00:12:28.657 Max: 16 00:12:28.657 Min: 16 00:12:28.657 Number of Namespaces: 32 00:12:28.657 Compare Command: Supported 00:12:28.657 Write Uncorrectable Command: Not Supported 00:12:28.657 Dataset Management Command: Supported 00:12:28.657 Write Zeroes Command: Supported 00:12:28.657 Set Features Save Field: Not Supported 00:12:28.657 Reservations: Not Supported 00:12:28.657 Timestamp: Not Supported 00:12:28.657 Copy: Supported 00:12:28.657 Volatile Write Cache: Present 00:12:28.657 Atomic Write Unit (Normal): 1 00:12:28.657 Atomic Write Unit (PFail): 1 00:12:28.657 Atomic Compare & Write Unit: 1 00:12:28.657 Fused Compare & Write: Supported 00:12:28.657 Scatter-Gather List 00:12:28.657 SGL Command Set: Supported (Dword aligned) 00:12:28.657 SGL Keyed: Not Supported 00:12:28.657 SGL Bit Bucket Descriptor: Not Supported 00:12:28.657 SGL Metadata Pointer: Not Supported 00:12:28.657 Oversized SGL: Not Supported 00:12:28.657 SGL Metadata Address: Not Supported 00:12:28.657 SGL Offset: Not Supported 00:12:28.657 Transport SGL Data Block: Not Supported 00:12:28.657 Replay Protected Memory Block: Not Supported 00:12:28.657 00:12:28.657 Firmware Slot Information 00:12:28.657 ========================= 00:12:28.657 Active slot: 1 00:12:28.657 Slot 1 Firmware Revision: 24.05 00:12:28.657 00:12:28.657 00:12:28.657 Commands Supported and Effects 00:12:28.657 ============================== 00:12:28.657 Admin Commands 00:12:28.657 -------------- 00:12:28.657 Get Log Page (02h): Supported 00:12:28.657 Identify (06h): Supported 00:12:28.657 Abort (08h): Supported 00:12:28.657 Set Features (09h): Supported 00:12:28.657 Get Features (0Ah): Supported 00:12:28.657 Asynchronous Event Request (0Ch): Supported 00:12:28.657 Keep Alive (18h): Supported 00:12:28.657 I/O Commands 00:12:28.657 ------------ 00:12:28.657 Flush (00h): Supported LBA-Change 00:12:28.657 Write (01h): Supported LBA-Change 00:12:28.657 Read (02h): Supported 00:12:28.657 Compare (05h): Supported 00:12:28.657 Write Zeroes (08h): Supported LBA-Change 00:12:28.657 Dataset Management (09h): Supported LBA-Change 00:12:28.657 Copy (19h): Supported LBA-Change 00:12:28.657 Unknown (79h): Supported LBA-Change 00:12:28.657 Unknown (7Ah): Supported 00:12:28.657 00:12:28.657 Error Log 00:12:28.657 ========= 00:12:28.657 00:12:28.657 Arbitration 00:12:28.657 =========== 00:12:28.657 Arbitration Burst: 1 00:12:28.657 00:12:28.657 Power Management 00:12:28.657 ================ 00:12:28.657 Number of Power States: 1 00:12:28.657 Current Power State: Power State #0 00:12:28.657 Power State #0: 00:12:28.657 Max Power: 0.00 W 00:12:28.657 Non-Operational State: Operational 00:12:28.657 Entry Latency: Not Reported 00:12:28.657 Exit Latency: Not Reported 00:12:28.657 Relative Read Throughput: 0 00:12:28.657 Relative Read Latency: 0 00:12:28.657 Relative Write Throughput: 0 00:12:28.657 Relative Write Latency: 0 00:12:28.657 Idle Power: Not Reported 00:12:28.657 Active Power: Not Reported 00:12:28.657 Non-Operational Permissive Mode: Not Supported 00:12:28.657 00:12:28.657 Health Information 00:12:28.657 ================== 00:12:28.657 Critical Warnings: 00:12:28.657 Available Spare Space: OK 00:12:28.657 Temperature: OK 00:12:28.657 Device Reliability: OK 00:12:28.657 
Read Only: No 00:12:28.657 Volatile Memory Backup: OK 00:12:28.657 Current Temperature: 0 Kelvin (-2[2024-04-26 23:56:58.758976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:28.657 [2024-04-26 23:56:58.766844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:28.657 [2024-04-26 23:56:58.766872] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:28.657 [2024-04-26 23:56:58.766881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.657 [2024-04-26 23:56:58.766887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.657 [2024-04-26 23:56:58.766894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.657 [2024-04-26 23:56:58.766900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.658 [2024-04-26 23:56:58.766942] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:28.658 [2024-04-26 23:56:58.766952] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:28.658 [2024-04-26 23:56:58.767949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:28.658 [2024-04-26 23:56:58.767997] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:28.658 [2024-04-26 23:56:58.768003] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:28.658 [2024-04-26 23:56:58.768956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:28.658 [2024-04-26 23:56:58.768967] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:28.658 [2024-04-26 23:56:58.769015] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:28.658 [2024-04-26 23:56:58.770391] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:28.658 73 Celsius) 00:12:28.658 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:28.658 Available Spare: 0% 00:12:28.658 Available Spare Threshold: 0% 00:12:28.658 Life Percentage Used: 0% 00:12:28.658 Data Units Read: 0 00:12:28.658 Data Units Written: 0 00:12:28.658 Host Read Commands: 0 00:12:28.658 Host Write Commands: 0 00:12:28.658 Controller Busy Time: 0 minutes 00:12:28.658 Power Cycles: 0 00:12:28.658 Power On Hours: 0 hours 00:12:28.658 Unsafe Shutdowns: 0 00:12:28.658 Unrecoverable Media Errors: 0 00:12:28.658 Lifetime Error Log Entries: 0 00:12:28.658 Warning Temperature Time: 0 minutes 00:12:28.658 Critical Temperature Time: 0 minutes 00:12:28.658 00:12:28.658 Number of Queues 00:12:28.658 ================ 00:12:28.658 Number of I/O Submission Queues: 127 
00:12:28.658 Number of I/O Completion Queues: 127 00:12:28.658 00:12:28.658 Active Namespaces 00:12:28.658 ================= 00:12:28.658 Namespace ID:1 00:12:28.658 Error Recovery Timeout: Unlimited 00:12:28.658 Command Set Identifier: NVM (00h) 00:12:28.658 Deallocate: Supported 00:12:28.658 Deallocated/Unwritten Error: Not Supported 00:12:28.658 Deallocated Read Value: Unknown 00:12:28.658 Deallocate in Write Zeroes: Not Supported 00:12:28.658 Deallocated Guard Field: 0xFFFF 00:12:28.658 Flush: Supported 00:12:28.658 Reservation: Supported 00:12:28.658 Namespace Sharing Capabilities: Multiple Controllers 00:12:28.658 Size (in LBAs): 131072 (0GiB) 00:12:28.658 Capacity (in LBAs): 131072 (0GiB) 00:12:28.658 Utilization (in LBAs): 131072 (0GiB) 00:12:28.658 NGUID: D41C41CBC3D74AA18E31E6BB441EC237 00:12:28.658 UUID: d41c41cb-c3d7-4aa1-8e31-e6bb441ec237 00:12:28.658 Thin Provisioning: Not Supported 00:12:28.658 Per-NS Atomic Units: Yes 00:12:28.658 Atomic Boundary Size (Normal): 0 00:12:28.658 Atomic Boundary Size (PFail): 0 00:12:28.658 Atomic Boundary Offset: 0 00:12:28.658 Maximum Single Source Range Length: 65535 00:12:28.658 Maximum Copy Length: 65535 00:12:28.658 Maximum Source Range Count: 1 00:12:28.658 NGUID/EUI64 Never Reused: No 00:12:28.658 Namespace Write Protected: No 00:12:28.658 Number of LBA Formats: 1 00:12:28.658 Current LBA Format: LBA Format #00 00:12:28.658 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.658 00:12:28.658 23:56:58 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:28.658 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.917 [2024-04-26 23:56:58.972199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:34.198 [2024-04-26 23:57:04.078052] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:34.198 Initializing NVMe Controllers 00:12:34.198 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:34.198 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:34.198 Initialization complete. Launching workers. 
00:12:34.198 ======================================================== 00:12:34.198 Latency(us) 00:12:34.198 Device Information : IOPS MiB/s Average min max 00:12:34.198 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 46474.71 181.54 2755.88 936.86 5799.05 00:12:34.198 ======================================================== 00:12:34.198 Total : 46474.71 181.54 2755.88 936.86 5799.05 00:12:34.198 00:12:34.198 23:57:04 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:34.198 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.198 [2024-04-26 23:57:04.283690] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:39.530 [2024-04-26 23:57:09.301901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:39.530 Initializing NVMe Controllers 00:12:39.530 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:39.530 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:39.530 Initialization complete. Launching workers. 00:12:39.530 ======================================================== 00:12:39.530 Latency(us) 00:12:39.530 Device Information : IOPS MiB/s Average min max 00:12:39.530 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34708.41 135.58 3687.17 1213.67 8603.65 00:12:39.530 ======================================================== 00:12:39.530 Total : 34708.41 135.58 3687.17 1213.67 8603.65 00:12:39.530 00:12:39.530 23:57:09 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:39.530 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.530 [2024-04-26 23:57:09.526103] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:44.819 [2024-04-26 23:57:14.673937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:44.819 Initializing NVMe Controllers 00:12:44.819 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:44.819 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:44.819 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:44.819 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:44.819 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:44.819 Initialization complete. Launching workers. 
00:12:44.819 Starting thread on core 2 00:12:44.819 Starting thread on core 3 00:12:44.819 Starting thread on core 1 00:12:44.819 23:57:14 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:44.819 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.819 [2024-04-26 23:57:14.943302] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.032 [2024-04-26 23:57:18.388979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.032 Initializing NVMe Controllers 00:12:49.032 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.032 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.032 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:49.032 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:49.032 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:49.032 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:49.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:49.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:49.032 Initialization complete. Launching workers. 00:12:49.032 Starting thread on core 1 with urgent priority queue 00:12:49.032 Starting thread on core 2 with urgent priority queue 00:12:49.032 Starting thread on core 3 with urgent priority queue 00:12:49.032 Starting thread on core 0 with urgent priority queue 00:12:49.032 SPDK bdev Controller (SPDK2 ) core 0: 4640.00 IO/s 21.55 secs/100000 ios 00:12:49.032 SPDK bdev Controller (SPDK2 ) core 1: 9554.33 IO/s 10.47 secs/100000 ios 00:12:49.032 SPDK bdev Controller (SPDK2 ) core 2: 11743.00 IO/s 8.52 secs/100000 ios 00:12:49.032 SPDK bdev Controller (SPDK2 ) core 3: 3626.67 IO/s 27.57 secs/100000 ios 00:12:49.032 ======================================================== 00:12:49.032 00:12:49.032 23:57:18 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.032 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.032 [2024-04-26 23:57:18.650902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.032 [2024-04-26 23:57:18.660963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.032 Initializing NVMe Controllers 00:12:49.032 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.032 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.032 Namespace ID: 1 size: 0GB 00:12:49.032 Initialization complete. 00:12:49.032 INFO: using host memory buffer for IO 00:12:49.032 Hello world! 
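For orientation, the identify/perf/arbitration/hello_world runs above all point at the same vfio-user endpoint; a minimal sketch of that invocation pattern, kept to the transport string and option values already visible in this log (absolute workspace paths shortened to the build tree, -L debug flags omitted), is:

    # endpoint created earlier by the target for vfio-user2 (assumed unchanged)
    TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    build/bin/spdk_nvme_identify -r "$TR" -g                                           # controller/namespace identify dump
    build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2     # 4 KiB read perf pass
    build/examples/hello_world -r "$TR" -g -d 256                                      # minimal single-I/O example (host memory buffer)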
00:12:49.032 23:57:18 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.032 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.032 [2024-04-26 23:57:18.915097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.976 Initializing NVMe Controllers 00:12:49.976 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.976 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.976 Initialization complete. Launching workers. 00:12:49.976 submit (in ns) avg, min, max = 7980.3, 3880.8, 4001513.3 00:12:49.976 complete (in ns) avg, min, max = 24685.6, 2360.8, 6991116.7 00:12:49.976 00:12:49.976 Submit histogram 00:12:49.976 ================ 00:12:49.976 Range in us Cumulative Count 00:12:49.976 3.867 - 3.893: 0.3742% ( 57) 00:12:49.976 3.893 - 3.920: 3.0918% ( 414) 00:12:49.976 3.920 - 3.947: 8.7502% ( 862) 00:12:49.976 3.947 - 3.973: 19.0954% ( 1576) 00:12:49.976 3.973 - 4.000: 31.2196% ( 1847) 00:12:49.976 4.000 - 4.027: 43.2060% ( 1826) 00:12:49.976 4.027 - 4.053: 58.6845% ( 2358) 00:12:49.976 4.053 - 4.080: 74.8392% ( 2461) 00:12:49.976 4.080 - 4.107: 87.6920% ( 1958) 00:12:49.976 4.107 - 4.133: 95.0243% ( 1117) 00:12:49.976 4.133 - 4.160: 98.2473% ( 491) 00:12:49.976 4.160 - 4.187: 99.1270% ( 134) 00:12:49.976 4.187 - 4.213: 99.2714% ( 22) 00:12:49.976 4.213 - 4.240: 99.3239% ( 8) 00:12:49.976 4.240 - 4.267: 99.3567% ( 5) 00:12:49.976 4.267 - 4.293: 99.3895% ( 5) 00:12:49.976 4.293 - 4.320: 99.4027% ( 2) 00:12:49.976 4.320 - 4.347: 99.4355% ( 5) 00:12:49.976 4.373 - 4.400: 99.4486% ( 2) 00:12:49.976 4.480 - 4.507: 99.4552% ( 1) 00:12:49.976 4.507 - 4.533: 99.4617% ( 1) 00:12:49.976 4.587 - 4.613: 99.4683% ( 1) 00:12:49.976 4.853 - 4.880: 99.4749% ( 1) 00:12:49.976 4.880 - 4.907: 99.4814% ( 1) 00:12:49.976 4.907 - 4.933: 99.4880% ( 1) 00:12:49.976 5.227 - 5.253: 99.4946% ( 1) 00:12:49.976 5.413 - 5.440: 99.5011% ( 1) 00:12:49.976 5.840 - 5.867: 99.5077% ( 1) 00:12:49.976 5.920 - 5.947: 99.5208% ( 2) 00:12:49.976 6.000 - 6.027: 99.5274% ( 1) 00:12:49.976 6.080 - 6.107: 99.5339% ( 1) 00:12:49.976 6.107 - 6.133: 99.5405% ( 1) 00:12:49.976 6.133 - 6.160: 99.5536% ( 2) 00:12:49.976 6.160 - 6.187: 99.5668% ( 2) 00:12:49.976 6.213 - 6.240: 99.5733% ( 1) 00:12:49.976 6.267 - 6.293: 99.5930% ( 3) 00:12:49.976 6.507 - 6.533: 99.5996% ( 1) 00:12:49.976 6.533 - 6.560: 99.6061% ( 1) 00:12:49.976 6.587 - 6.613: 99.6258% ( 3) 00:12:49.976 6.693 - 6.720: 99.6390% ( 2) 00:12:49.976 6.747 - 6.773: 99.6455% ( 1) 00:12:49.976 6.800 - 6.827: 99.6587% ( 2) 00:12:49.976 6.880 - 6.933: 99.6652% ( 1) 00:12:49.976 6.933 - 6.987: 99.6915% ( 4) 00:12:49.976 6.987 - 7.040: 99.7046% ( 2) 00:12:49.976 7.040 - 7.093: 99.7112% ( 1) 00:12:49.976 7.093 - 7.147: 99.7243% ( 2) 00:12:49.976 7.147 - 7.200: 99.7309% ( 1) 00:12:49.976 7.253 - 7.307: 99.7374% ( 1) 00:12:49.976 7.307 - 7.360: 99.7440% ( 1) 00:12:49.976 7.360 - 7.413: 99.7571% ( 2) 00:12:49.976 7.413 - 7.467: 99.7637% ( 1) 00:12:49.976 7.467 - 7.520: 99.7834% ( 3) 00:12:49.976 7.520 - 7.573: 99.7965% ( 2) 00:12:49.976 7.573 - 7.627: 99.8096% ( 2) 00:12:49.976 7.627 - 7.680: 99.8162% ( 1) 00:12:49.976 7.840 - 7.893: 99.8359% ( 3) 00:12:49.976 7.893 - 7.947: 99.8425% ( 1) 00:12:49.976 8.107 - 8.160: 99.8490% ( 1) 00:12:49.976 8.160 - 8.213: 99.8556% ( 1) 00:12:49.976 8.213 - 8.267: 99.8622% ( 1) 
00:12:49.976 8.267 - 8.320: 99.8687% ( 1) 00:12:49.976 8.373 - 8.427: 99.8753% ( 1) 00:12:49.976 8.853 - 8.907: 99.8818% ( 1) 00:12:49.976 9.173 - 9.227: 99.8884% ( 1) 00:12:49.976 9.280 - 9.333: 99.8950% ( 1) 00:12:49.976 9.867 - 9.920: 99.9015% ( 1) 00:12:49.976 3986.773 - 4014.080: 100.0000% ( 15) 00:12:49.976 00:12:49.976 Complete histogram 00:12:49.976 ================== 00:12:49.976 Range in us Cumulative Count 00:12:49.976 2.360 - 2.373: 2.7832% ( 424) 00:12:49.976 2.373 - 2.387: 3.1180% ( 51) 00:12:49.976 2.387 - 2.400: 3.7416% ( 95) 00:12:49.976 2.400 - 2.413: 4.0239% ( 43) 00:12:49.976 2.413 - 2.427: 6.5314% ( 382) 00:12:49.976 2.427 - 2.440: 53.0393% ( 7085) 00:12:49.976 2.440 - 2.453: 61.3430% ( 1265) 00:12:49.976 2.453 - 2.467: 74.6422% ( 2026) 00:12:49.976 2.467 - 2.480: 80.1694% ( 842) 00:12:49.976 2.480 - [2024-04-26 23:57:20.013508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.976 2.493: 82.2765% ( 321) 00:12:49.976 2.493 - 2.507: 84.3967% ( 323) 00:12:49.976 2.507 - 2.520: 89.6744% ( 804) 00:12:49.976 2.520 - 2.533: 93.8493% ( 636) 00:12:49.976 2.533 - 2.547: 96.5538% ( 412) 00:12:49.976 2.547 - 2.560: 98.0767% ( 232) 00:12:49.976 2.560 - 2.573: 98.9169% ( 128) 00:12:49.976 2.573 - 2.587: 99.2451% ( 50) 00:12:49.976 2.587 - 2.600: 99.2845% ( 6) 00:12:49.976 2.600 - 2.613: 99.2911% ( 1) 00:12:49.976 3.027 - 3.040: 99.2976% ( 1) 00:12:49.976 4.987 - 5.013: 99.3108% ( 2) 00:12:49.976 5.173 - 5.200: 99.3173% ( 1) 00:12:49.976 5.307 - 5.333: 99.3239% ( 1) 00:12:49.976 5.440 - 5.467: 99.3304% ( 1) 00:12:49.976 5.493 - 5.520: 99.3370% ( 1) 00:12:49.976 5.520 - 5.547: 99.3436% ( 1) 00:12:49.976 5.573 - 5.600: 99.3501% ( 1) 00:12:49.976 5.680 - 5.707: 99.3567% ( 1) 00:12:49.976 5.840 - 5.867: 99.3633% ( 1) 00:12:49.976 5.893 - 5.920: 99.3698% ( 1) 00:12:49.976 5.920 - 5.947: 99.3764% ( 1) 00:12:49.976 6.000 - 6.027: 99.3830% ( 1) 00:12:49.976 6.027 - 6.053: 99.3895% ( 1) 00:12:49.976 6.107 - 6.133: 99.3961% ( 1) 00:12:49.976 6.213 - 6.240: 99.4027% ( 1) 00:12:49.976 7.040 - 7.093: 99.4092% ( 1) 00:12:49.976 11.787 - 11.840: 99.4158% ( 1) 00:12:49.976 12.107 - 12.160: 99.4223% ( 1) 00:12:49.976 12.427 - 12.480: 99.4289% ( 1) 00:12:49.976 13.013 - 13.067: 99.4355% ( 1) 00:12:49.976 13.653 - 13.760: 99.4420% ( 1) 00:12:49.976 14.827 - 14.933: 99.4486% ( 1) 00:12:49.976 3986.773 - 4014.080: 99.9934% ( 83) 00:12:49.976 6990.507 - 7045.120: 100.0000% ( 1) 00:12:49.976 00:12:49.976 23:57:20 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:49.976 23:57:20 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:49.976 23:57:20 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:49.976 23:57:20 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:49.976 23:57:20 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:50.238 [ 00:12:50.238 { 00:12:50.238 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:50.238 "subtype": "Discovery", 00:12:50.238 "listen_addresses": [], 00:12:50.238 "allow_any_host": true, 00:12:50.238 "hosts": [] 00:12:50.238 }, 00:12:50.238 { 00:12:50.238 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:50.238 "subtype": "NVMe", 00:12:50.238 "listen_addresses": [ 00:12:50.238 { 00:12:50.238 "transport": "VFIOUSER", 00:12:50.238 "trtype": "VFIOUSER", 00:12:50.238 "adrfam": "IPv4", 
00:12:50.238 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:50.238 "trsvcid": "0" 00:12:50.238 } 00:12:50.238 ], 00:12:50.238 "allow_any_host": true, 00:12:50.238 "hosts": [], 00:12:50.238 "serial_number": "SPDK1", 00:12:50.238 "model_number": "SPDK bdev Controller", 00:12:50.238 "max_namespaces": 32, 00:12:50.238 "min_cntlid": 1, 00:12:50.238 "max_cntlid": 65519, 00:12:50.238 "namespaces": [ 00:12:50.238 { 00:12:50.238 "nsid": 1, 00:12:50.238 "bdev_name": "Malloc1", 00:12:50.238 "name": "Malloc1", 00:12:50.238 "nguid": "FC07B587190E4C82975E1BF58F0AC033", 00:12:50.238 "uuid": "fc07b587-190e-4c82-975e-1bf58f0ac033" 00:12:50.238 }, 00:12:50.238 { 00:12:50.238 "nsid": 2, 00:12:50.238 "bdev_name": "Malloc3", 00:12:50.238 "name": "Malloc3", 00:12:50.238 "nguid": "B176076BF71A4247B08D88FE4554E069", 00:12:50.238 "uuid": "b176076b-f71a-4247-b08d-88fe4554e069" 00:12:50.238 } 00:12:50.238 ] 00:12:50.238 }, 00:12:50.238 { 00:12:50.238 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:50.238 "subtype": "NVMe", 00:12:50.238 "listen_addresses": [ 00:12:50.238 { 00:12:50.238 "transport": "VFIOUSER", 00:12:50.238 "trtype": "VFIOUSER", 00:12:50.238 "adrfam": "IPv4", 00:12:50.238 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:50.238 "trsvcid": "0" 00:12:50.238 } 00:12:50.238 ], 00:12:50.238 "allow_any_host": true, 00:12:50.238 "hosts": [], 00:12:50.238 "serial_number": "SPDK2", 00:12:50.238 "model_number": "SPDK bdev Controller", 00:12:50.238 "max_namespaces": 32, 00:12:50.238 "min_cntlid": 1, 00:12:50.238 "max_cntlid": 65519, 00:12:50.238 "namespaces": [ 00:12:50.238 { 00:12:50.238 "nsid": 1, 00:12:50.238 "bdev_name": "Malloc2", 00:12:50.238 "name": "Malloc2", 00:12:50.238 "nguid": "D41C41CBC3D74AA18E31E6BB441EC237", 00:12:50.238 "uuid": "d41c41cb-c3d7-4aa1-8e31-e6bb441ec237" 00:12:50.238 } 00:12:50.238 ] 00:12:50.238 } 00:12:50.238 ] 00:12:50.238 23:57:20 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:50.238 23:57:20 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:50.238 23:57:20 -- target/nvmf_vfio_user.sh@34 -- # aerpid=317140 00:12:50.238 23:57:20 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:50.238 23:57:20 -- common/autotest_common.sh@1251 -- # local i=0 00:12:50.238 23:57:20 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:50.238 23:57:20 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:50.238 23:57:20 -- common/autotest_common.sh@1262 -- # return 0 00:12:50.238 23:57:20 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:50.238 23:57:20 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:50.238 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.238 [2024-04-26 23:57:20.386205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.238 Malloc4 00:12:50.238 23:57:20 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:50.500 [2024-04-26 23:57:20.559304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:50.500 23:57:20 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:50.500 Asynchronous Event Request test 00:12:50.500 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.500 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.500 Registering asynchronous event callbacks... 00:12:50.500 Starting namespace attribute notice tests for all controllers... 00:12:50.500 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:50.500 aer_cb - Changed Namespace 00:12:50.500 Cleaning up... 00:12:50.500 [ 00:12:50.500 { 00:12:50.500 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:50.500 "subtype": "Discovery", 00:12:50.500 "listen_addresses": [], 00:12:50.500 "allow_any_host": true, 00:12:50.500 "hosts": [] 00:12:50.500 }, 00:12:50.500 { 00:12:50.500 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:50.500 "subtype": "NVMe", 00:12:50.500 "listen_addresses": [ 00:12:50.500 { 00:12:50.500 "transport": "VFIOUSER", 00:12:50.500 "trtype": "VFIOUSER", 00:12:50.500 "adrfam": "IPv4", 00:12:50.500 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:50.500 "trsvcid": "0" 00:12:50.500 } 00:12:50.500 ], 00:12:50.500 "allow_any_host": true, 00:12:50.500 "hosts": [], 00:12:50.500 "serial_number": "SPDK1", 00:12:50.500 "model_number": "SPDK bdev Controller", 00:12:50.500 "max_namespaces": 32, 00:12:50.500 "min_cntlid": 1, 00:12:50.500 "max_cntlid": 65519, 00:12:50.500 "namespaces": [ 00:12:50.500 { 00:12:50.500 "nsid": 1, 00:12:50.500 "bdev_name": "Malloc1", 00:12:50.500 "name": "Malloc1", 00:12:50.500 "nguid": "FC07B587190E4C82975E1BF58F0AC033", 00:12:50.500 "uuid": "fc07b587-190e-4c82-975e-1bf58f0ac033" 00:12:50.500 }, 00:12:50.500 { 00:12:50.500 "nsid": 2, 00:12:50.500 "bdev_name": "Malloc3", 00:12:50.500 "name": "Malloc3", 00:12:50.500 "nguid": "B176076BF71A4247B08D88FE4554E069", 00:12:50.500 "uuid": "b176076b-f71a-4247-b08d-88fe4554e069" 00:12:50.500 } 00:12:50.500 ] 00:12:50.500 }, 00:12:50.500 { 00:12:50.500 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:50.500 "subtype": "NVMe", 00:12:50.500 "listen_addresses": [ 00:12:50.500 { 00:12:50.500 "transport": "VFIOUSER", 00:12:50.500 "trtype": "VFIOUSER", 00:12:50.500 "adrfam": "IPv4", 00:12:50.500 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:50.500 "trsvcid": "0" 00:12:50.500 } 00:12:50.500 ], 00:12:50.500 "allow_any_host": true, 00:12:50.500 "hosts": [], 00:12:50.500 "serial_number": "SPDK2", 00:12:50.500 "model_number": "SPDK bdev Controller", 00:12:50.500 "max_namespaces": 32, 00:12:50.500 "min_cntlid": 1, 
00:12:50.500 "max_cntlid": 65519, 00:12:50.500 "namespaces": [ 00:12:50.500 { 00:12:50.500 "nsid": 1, 00:12:50.500 "bdev_name": "Malloc2", 00:12:50.500 "name": "Malloc2", 00:12:50.500 "nguid": "D41C41CBC3D74AA18E31E6BB441EC237", 00:12:50.500 "uuid": "d41c41cb-c3d7-4aa1-8e31-e6bb441ec237" 00:12:50.500 }, 00:12:50.500 { 00:12:50.500 "nsid": 2, 00:12:50.500 "bdev_name": "Malloc4", 00:12:50.500 "name": "Malloc4", 00:12:50.500 "nguid": "324C3318804F46448A1789717CA19358", 00:12:50.500 "uuid": "324c3318-804f-4644-8a17-89717ca19358" 00:12:50.500 } 00:12:50.500 ] 00:12:50.500 } 00:12:50.500 ] 00:12:50.762 23:57:20 -- target/nvmf_vfio_user.sh@44 -- # wait 317140 00:12:50.762 23:57:20 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:50.762 23:57:20 -- target/nvmf_vfio_user.sh@95 -- # killprocess 307474 00:12:50.762 23:57:20 -- common/autotest_common.sh@936 -- # '[' -z 307474 ']' 00:12:50.762 23:57:20 -- common/autotest_common.sh@940 -- # kill -0 307474 00:12:50.762 23:57:20 -- common/autotest_common.sh@941 -- # uname 00:12:50.762 23:57:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.762 23:57:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 307474 00:12:50.762 23:57:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:50.762 23:57:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:50.762 23:57:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 307474' 00:12:50.762 killing process with pid 307474 00:12:50.762 23:57:20 -- common/autotest_common.sh@955 -- # kill 307474 00:12:50.762 [2024-04-26 23:57:20.801042] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:12:50.762 23:57:20 -- common/autotest_common.sh@960 -- # wait 307474 00:12:50.762 23:57:20 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:50.762 23:57:20 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:50.762 23:57:20 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:50.763 23:57:20 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:50.763 23:57:20 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:51.025 23:57:20 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=317233 00:12:51.025 23:57:20 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 317233' 00:12:51.025 Process pid: 317233 00:12:51.025 23:57:20 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:51.025 23:57:20 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:51.025 23:57:20 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 317233 00:12:51.025 23:57:20 -- common/autotest_common.sh@817 -- # '[' -z 317233 ']' 00:12:51.025 23:57:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.025 23:57:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:51.025 23:57:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:51.025 23:57:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:51.025 23:57:20 -- common/autotest_common.sh@10 -- # set +x 00:12:51.025 [2024-04-26 23:57:21.029692] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:51.025 [2024-04-26 23:57:21.030641] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:12:51.025 [2024-04-26 23:57:21.030682] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.025 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.025 [2024-04-26 23:57:21.092728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.025 [2024-04-26 23:57:21.157830] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.025 [2024-04-26 23:57:21.157873] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.025 [2024-04-26 23:57:21.157881] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.025 [2024-04-26 23:57:21.157887] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.025 [2024-04-26 23:57:21.157893] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.025 [2024-04-26 23:57:21.158049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.025 [2024-04-26 23:57:21.158186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.025 [2024-04-26 23:57:21.158369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.025 [2024-04-26 23:57:21.158370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.025 [2024-04-26 23:57:21.228640] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:12:51.025 [2024-04-26 23:57:21.228733] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:12:51.025 [2024-04-26 23:57:21.229087] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:12:51.025 [2024-04-26 23:57:21.229251] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:51.025 [2024-04-26 23:57:21.229339] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
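The per-device setup that follows (nvmf_vfio_user.sh steps @64 through @74) is easier to read condensed into plain shell; the sketch below uses only the RPCs and arguments traced further down, with the long Jenkins path abbreviated to the repo-relative scripts/rpc.py:

    # interrupt-mode vfio-user transport, then one malloc-backed subsystem per device
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # the same sequence repeats for Malloc2 / cnode2 / vfio-user2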
00:12:51.599 23:57:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:51.599 23:57:21 -- common/autotest_common.sh@850 -- # return 0 00:12:51.599 23:57:21 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:52.984 23:57:22 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:52.984 23:57:22 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:52.984 23:57:22 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:52.984 23:57:22 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:52.984 23:57:22 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:52.984 23:57:22 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:52.984 Malloc1 00:12:52.984 23:57:23 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:53.245 23:57:23 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:53.245 23:57:23 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:53.506 23:57:23 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.506 23:57:23 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:53.506 23:57:23 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:53.767 Malloc2 00:12:53.767 23:57:23 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:53.767 23:57:23 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:54.028 23:57:24 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:54.290 23:57:24 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:54.290 23:57:24 -- target/nvmf_vfio_user.sh@95 -- # killprocess 317233 00:12:54.290 23:57:24 -- common/autotest_common.sh@936 -- # '[' -z 317233 ']' 00:12:54.290 23:57:24 -- common/autotest_common.sh@940 -- # kill -0 317233 00:12:54.290 23:57:24 -- common/autotest_common.sh@941 -- # uname 00:12:54.290 23:57:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:54.290 23:57:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 317233 00:12:54.290 23:57:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:54.290 23:57:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:54.290 23:57:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 317233' 00:12:54.290 killing process with pid 317233 00:12:54.290 23:57:24 -- common/autotest_common.sh@955 -- # kill 317233 00:12:54.290 23:57:24 -- common/autotest_common.sh@960 -- # wait 317233 00:12:54.290 23:57:24 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:12:54.551 23:57:24 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:54.551 00:12:54.551 real 0m50.957s 00:12:54.551 user 3m22.057s 00:12:54.551 sys 0m2.960s 00:12:54.551 23:57:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:54.551 23:57:24 -- common/autotest_common.sh@10 -- # set +x 00:12:54.551 ************************************ 00:12:54.551 END TEST nvmf_vfio_user 00:12:54.551 ************************************ 00:12:54.551 23:57:24 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:54.551 23:57:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:54.551 23:57:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.551 23:57:24 -- common/autotest_common.sh@10 -- # set +x 00:12:54.551 ************************************ 00:12:54.551 START TEST nvmf_vfio_user_nvme_compliance 00:12:54.551 ************************************ 00:12:54.551 23:57:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:54.814 * Looking for test storage... 00:12:54.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:54.814 23:57:24 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.814 23:57:24 -- nvmf/common.sh@7 -- # uname -s 00:12:54.814 23:57:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.814 23:57:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.814 23:57:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.815 23:57:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.815 23:57:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.815 23:57:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.815 23:57:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.815 23:57:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.815 23:57:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.815 23:57:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.815 23:57:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:54.815 23:57:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:54.815 23:57:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.815 23:57:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.815 23:57:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.815 23:57:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.815 23:57:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.815 23:57:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.815 23:57:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.815 23:57:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.815 23:57:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.815 23:57:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.815 23:57:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.815 23:57:24 -- paths/export.sh@5 -- # export PATH 00:12:54.815 23:57:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.815 23:57:24 -- nvmf/common.sh@47 -- # : 0 00:12:54.815 23:57:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.815 23:57:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.815 23:57:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.815 23:57:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.815 23:57:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.815 23:57:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.815 23:57:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.815 23:57:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.815 23:57:24 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.815 23:57:24 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.815 23:57:24 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:54.815 23:57:24 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:54.815 23:57:24 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:54.815 23:57:24 -- compliance/compliance.sh@20 -- # nvmfpid=318212 00:12:54.815 23:57:24 -- compliance/compliance.sh@21 -- # echo 'Process pid: 318212' 00:12:54.815 Process pid: 318212 00:12:54.815 23:57:24 -- 
compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:54.815 23:57:24 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:54.815 23:57:24 -- compliance/compliance.sh@24 -- # waitforlisten 318212 00:12:54.815 23:57:24 -- common/autotest_common.sh@817 -- # '[' -z 318212 ']' 00:12:54.815 23:57:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.815 23:57:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:54.815 23:57:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.815 23:57:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:54.815 23:57:24 -- common/autotest_common.sh@10 -- # set +x 00:12:54.815 [2024-04-26 23:57:24.896748] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:12:54.815 [2024-04-26 23:57:24.896813] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.815 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.815 [2024-04-26 23:57:24.963013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:55.076 [2024-04-26 23:57:25.035639] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.076 [2024-04-26 23:57:25.035679] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.076 [2024-04-26 23:57:25.035687] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.076 [2024-04-26 23:57:25.035693] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.076 [2024-04-26 23:57:25.035699] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
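The nvmf_vfio_user.sh run earlier in this log provisions each vfio-user controller entirely through scripts/rpc.py. Collapsed into one place, the sequence it issued for the first controller looks roughly like the sketch below; the bdev name, serial and socket directory are the ones recorded above, and the functional run also passed -M -I to the transport call, omitted here for brevity.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK_DIR=/var/run/vfio-user/domain/vfio-user1/1

$RPC nvmf_create_transport -t VFIOUSER            # one VFIOUSER transport for all controllers
mkdir -p "$SOCK_DIR"                              # socket directory the listener will use
$RPC bdev_malloc_create 64 512 -b Malloc1         # 64 MiB backing bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a "$SOCK_DIR" -s 0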
00:12:55.076 [2024-04-26 23:57:25.035750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.076 [2024-04-26 23:57:25.035899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.076 [2024-04-26 23:57:25.035901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.646 23:57:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:55.646 23:57:25 -- common/autotest_common.sh@850 -- # return 0 00:12:55.646 23:57:25 -- compliance/compliance.sh@26 -- # sleep 1 00:12:56.588 23:57:26 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:56.588 23:57:26 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:56.588 23:57:26 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:56.588 23:57:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.588 23:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:56.588 23:57:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.588 23:57:26 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:56.589 23:57:26 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:56.589 23:57:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.589 23:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:56.589 malloc0 00:12:56.589 23:57:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.589 23:57:26 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:56.589 23:57:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.589 23:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:56.589 23:57:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.589 23:57:26 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:56.589 23:57:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.589 23:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:56.589 23:57:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.589 23:57:26 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:56.589 23:57:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.589 23:57:26 -- common/autotest_common.sh@10 -- # set +x 00:12:56.589 23:57:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.589 23:57:26 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:56.850 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.850 00:12:56.850 00:12:56.850 CUnit - A unit testing framework for C - Version 2.1-3 00:12:56.850 http://cunit.sourceforge.net/ 00:12:56.850 00:12:56.850 00:12:56.850 Suite: nvme_compliance 00:12:56.850 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-26 23:57:26.937292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.850 [2024-04-26 23:57:26.938656] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:56.850 [2024-04-26 23:57:26.938670] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:56.850 [2024-04-26 23:57:26.938676] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:56.850 
[2024-04-26 23:57:26.941319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.850 passed 00:12:56.850 Test: admin_identify_ctrlr_verify_fused ...[2024-04-26 23:57:27.035929] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.850 [2024-04-26 23:57:27.038940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.111 passed 00:12:57.111 Test: admin_identify_ns ...[2024-04-26 23:57:27.137110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.111 [2024-04-26 23:57:27.196851] ctrlr.c:2670:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:57.111 [2024-04-26 23:57:27.204858] ctrlr.c:2670:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:57.111 [2024-04-26 23:57:27.225959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.111 passed 00:12:57.111 Test: admin_get_features_mandatory_features ...[2024-04-26 23:57:27.317602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.111 [2024-04-26 23:57:27.320623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.372 passed 00:12:57.372 Test: admin_get_features_optional_features ...[2024-04-26 23:57:27.414177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.372 [2024-04-26 23:57:27.417193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.372 passed 00:12:57.372 Test: admin_set_features_number_of_queues ...[2024-04-26 23:57:27.511291] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.637 [2024-04-26 23:57:27.615940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.637 passed 00:12:57.637 Test: admin_get_log_page_mandatory_logs ...[2024-04-26 23:57:27.707936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.637 [2024-04-26 23:57:27.710961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.637 passed 00:12:57.637 Test: admin_get_log_page_with_lpo ...[2024-04-26 23:57:27.804095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.898 [2024-04-26 23:57:27.875847] ctrlr.c:2618:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:57.898 [2024-04-26 23:57:27.888899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.898 passed 00:12:57.898 Test: fabric_property_get ...[2024-04-26 23:57:27.978532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.898 [2024-04-26 23:57:27.979776] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:57.898 [2024-04-26 23:57:27.981554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.898 passed 00:12:57.898 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-26 23:57:28.075111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.898 [2024-04-26 23:57:28.076350] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:57.898 [2024-04-26 23:57:28.078132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:12:57.898 passed 00:12:58.160 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-26 23:57:28.172236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.160 [2024-04-26 23:57:28.255846] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:58.160 [2024-04-26 23:57:28.271846] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:58.160 [2024-04-26 23:57:28.276930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.160 passed 00:12:58.160 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-26 23:57:28.370963] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.160 [2024-04-26 23:57:28.372183] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:58.160 [2024-04-26 23:57:28.373978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.421 passed 00:12:58.421 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-26 23:57:28.466116] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.421 [2024-04-26 23:57:28.541843] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:58.421 [2024-04-26 23:57:28.565842] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:58.421 [2024-04-26 23:57:28.570919] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.421 passed 00:12:58.683 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-26 23:57:28.664912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.683 [2024-04-26 23:57:28.666136] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:58.683 [2024-04-26 23:57:28.666154] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:58.683 [2024-04-26 23:57:28.667932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.683 passed 00:12:58.683 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-26 23:57:28.761067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.683 [2024-04-26 23:57:28.852849] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:58.683 [2024-04-26 23:57:28.860844] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:58.683 [2024-04-26 23:57:28.868845] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:58.683 [2024-04-26 23:57:28.876853] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:58.944 [2024-04-26 23:57:28.905930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.944 passed 00:12:58.944 Test: admin_create_io_sq_verify_pc ...[2024-04-26 23:57:28.999903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.944 [2024-04-26 23:57:29.015851] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:58.944 [2024-04-26 23:57:29.036069] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.944 passed 00:12:58.944 Test: admin_create_io_qp_max_qps ...[2024-04-26 23:57:29.127661] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.327 [2024-04-26 23:57:30.225847] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:00.587 [2024-04-26 23:57:30.621985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.587 passed 00:13:00.588 Test: admin_create_io_sq_shared_cq ...[2024-04-26 23:57:30.715086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.849 [2024-04-26 23:57:30.846844] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:00.849 [2024-04-26 23:57:30.883897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.849 passed 00:13:00.849 00:13:00.849 Run Summary: Type Total Ran Passed Failed Inactive 00:13:00.849 suites 1 1 n/a 0 0 00:13:00.849 tests 18 18 18 0 0 00:13:00.849 asserts 360 360 360 0 n/a 00:13:00.849 00:13:00.849 Elapsed time = 1.655 seconds 00:13:00.849 23:57:30 -- compliance/compliance.sh@42 -- # killprocess 318212 00:13:00.849 23:57:30 -- common/autotest_common.sh@936 -- # '[' -z 318212 ']' 00:13:00.849 23:57:30 -- common/autotest_common.sh@940 -- # kill -0 318212 00:13:00.849 23:57:30 -- common/autotest_common.sh@941 -- # uname 00:13:00.849 23:57:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:00.849 23:57:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 318212 00:13:00.849 23:57:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:00.849 23:57:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:00.849 23:57:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 318212' 00:13:00.849 killing process with pid 318212 00:13:00.849 23:57:30 -- common/autotest_common.sh@955 -- # kill 318212 00:13:00.849 23:57:30 -- common/autotest_common.sh@960 -- # wait 318212 00:13:01.110 23:57:31 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:01.110 23:57:31 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:01.110 00:13:01.110 real 0m6.427s 00:13:01.110 user 0m18.389s 00:13:01.110 sys 0m0.468s 00:13:01.110 23:57:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:01.110 23:57:31 -- common/autotest_common.sh@10 -- # set +x 00:13:01.110 ************************************ 00:13:01.110 END TEST nvmf_vfio_user_nvme_compliance 00:13:01.110 ************************************ 00:13:01.110 23:57:31 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:01.110 23:57:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:01.110 23:57:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.110 23:57:31 -- common/autotest_common.sh@10 -- # set +x 00:13:01.110 ************************************ 00:13:01.110 START TEST nvmf_vfio_user_fuzz 00:13:01.110 ************************************ 00:13:01.110 23:57:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:01.371 * Looking for test storage... 
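The killprocess trace above runs the same checks each time before tearing a target down. A simplified sketch of that pattern; the real helper in autotest_common.sh has extra branches (for example when the process turns out to be a sudo wrapper) that the trace only hints at.

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                     # target must still be alive
    local name
    name=$(ps --no-headers -o comm= "$pid")        # reactor_0 in the runs above
    if [ "$name" != sudo ]; then                   # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                                    # reap it so real/user/sys can be reported
}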
00:13:01.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.371 23:57:31 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.371 23:57:31 -- nvmf/common.sh@7 -- # uname -s 00:13:01.371 23:57:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.371 23:57:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.371 23:57:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.371 23:57:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.371 23:57:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.371 23:57:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.371 23:57:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.371 23:57:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.371 23:57:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.371 23:57:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.371 23:57:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:01.371 23:57:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:01.371 23:57:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.371 23:57:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.371 23:57:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.371 23:57:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.371 23:57:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.371 23:57:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.371 23:57:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.371 23:57:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.371 23:57:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.371 23:57:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.371 23:57:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.371 23:57:31 -- paths/export.sh@5 -- # export PATH 00:13:01.371 23:57:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.371 23:57:31 -- nvmf/common.sh@47 -- # : 0 00:13:01.371 23:57:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.371 23:57:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.371 23:57:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.371 23:57:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.371 23:57:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.371 23:57:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.371 23:57:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.371 23:57:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.371 23:57:31 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=319497 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 319497' 00:13:01.372 Process pid: 319497 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:01.372 23:57:31 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 319497 00:13:01.372 23:57:31 -- common/autotest_common.sh@817 -- # '[' -z 319497 ']' 00:13:01.372 23:57:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.372 23:57:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:01.372 23:57:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:01.372 23:57:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:01.372 23:57:31 -- common/autotest_common.sh@10 -- # set +x 00:13:02.314 23:57:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:02.314 23:57:32 -- common/autotest_common.sh@850 -- # return 0 00:13:02.314 23:57:32 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:03.252 23:57:33 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:03.252 23:57:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.252 23:57:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 23:57:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.252 23:57:33 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:03.252 23:57:33 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:03.252 23:57:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.252 23:57:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 malloc0 00:13:03.252 23:57:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.252 23:57:33 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:03.252 23:57:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.252 23:57:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 23:57:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.252 23:57:33 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:03.252 23:57:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.252 23:57:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 23:57:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.252 23:57:33 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:03.252 23:57:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.252 23:57:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 23:57:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.252 23:57:33 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:03.252 23:57:33 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:35.367 Fuzzing completed. 
Shutting down the fuzz application 00:13:35.367 00:13:35.367 Dumping successful admin opcodes: 00:13:35.367 8, 9, 10, 24, 00:13:35.367 Dumping successful io opcodes: 00:13:35.367 0, 00:13:35.367 NS: 0x200003a1ef00 I/O qp, Total commands completed: 931120, total successful commands: 3632, random_seed: 2816233024 00:13:35.367 NS: 0x200003a1ef00 admin qp, Total commands completed: 229744, total successful commands: 1838, random_seed: 509394112 00:13:35.367 23:58:03 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:35.367 23:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.367 23:58:03 -- common/autotest_common.sh@10 -- # set +x 00:13:35.367 23:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.367 23:58:03 -- target/vfio_user_fuzz.sh@46 -- # killprocess 319497 00:13:35.367 23:58:03 -- common/autotest_common.sh@936 -- # '[' -z 319497 ']' 00:13:35.367 23:58:03 -- common/autotest_common.sh@940 -- # kill -0 319497 00:13:35.367 23:58:03 -- common/autotest_common.sh@941 -- # uname 00:13:35.367 23:58:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:35.367 23:58:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 319497 00:13:35.367 23:58:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:35.368 23:58:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:35.368 23:58:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 319497' 00:13:35.368 killing process with pid 319497 00:13:35.368 23:58:03 -- common/autotest_common.sh@955 -- # kill 319497 00:13:35.368 23:58:03 -- common/autotest_common.sh@960 -- # wait 319497 00:13:35.368 23:58:03 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:35.368 23:58:03 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:35.368 00:13:35.368 real 0m32.669s 00:13:35.368 user 0m35.760s 00:13:35.368 sys 0m23.752s 00:13:35.368 23:58:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:35.368 23:58:03 -- common/autotest_common.sh@10 -- # set +x 00:13:35.368 ************************************ 00:13:35.368 END TEST nvmf_vfio_user_fuzz 00:13:35.368 ************************************ 00:13:35.368 23:58:04 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:35.368 23:58:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:35.368 23:58:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:35.368 23:58:04 -- common/autotest_common.sh@10 -- # set +x 00:13:35.368 ************************************ 00:13:35.368 START TEST nvmf_host_management 00:13:35.368 ************************************ 00:13:35.368 23:58:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:35.368 * Looking for test storage... 
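The fuzz pass above is a plain command-line invocation, so it can be reproduced outside the harness once a matching vfio-user target is up. The flags below are copied from the run recorded in this log (fixed seed 123456, 30-second run, core mask 0x2).

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a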
00:13:35.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.368 23:58:04 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.368 23:58:04 -- nvmf/common.sh@7 -- # uname -s 00:13:35.368 23:58:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.368 23:58:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.368 23:58:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.368 23:58:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.368 23:58:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.368 23:58:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.368 23:58:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.368 23:58:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.368 23:58:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.368 23:58:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.368 23:58:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.368 23:58:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.368 23:58:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.368 23:58:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.368 23:58:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.368 23:58:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.368 23:58:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.368 23:58:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.368 23:58:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.368 23:58:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.368 23:58:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.368 23:58:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.368 23:58:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.368 23:58:04 -- paths/export.sh@5 -- # export PATH 00:13:35.368 23:58:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.368 23:58:04 -- nvmf/common.sh@47 -- # : 0 00:13:35.368 23:58:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.368 23:58:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.368 23:58:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.368 23:58:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.368 23:58:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.368 23:58:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.368 23:58:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.368 23:58:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.368 23:58:04 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.368 23:58:04 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.368 23:58:04 -- target/host_management.sh@105 -- # nvmftestinit 00:13:35.368 23:58:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:35.368 23:58:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.368 23:58:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:35.368 23:58:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:35.368 23:58:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:35.368 23:58:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.368 23:58:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.368 23:58:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.368 23:58:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:35.368 23:58:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:35.368 23:58:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.368 23:58:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.955 23:58:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:41.955 23:58:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:41.955 23:58:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:41.955 23:58:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:41.955 23:58:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:41.955 23:58:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:41.955 23:58:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:41.955 23:58:10 -- nvmf/common.sh@295 -- # net_devs=() 00:13:41.955 23:58:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:41.955 
23:58:10 -- nvmf/common.sh@296 -- # e810=() 00:13:41.955 23:58:10 -- nvmf/common.sh@296 -- # local -ga e810 00:13:41.955 23:58:10 -- nvmf/common.sh@297 -- # x722=() 00:13:41.955 23:58:10 -- nvmf/common.sh@297 -- # local -ga x722 00:13:41.955 23:58:10 -- nvmf/common.sh@298 -- # mlx=() 00:13:41.955 23:58:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:41.955 23:58:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.955 23:58:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:41.955 23:58:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:41.955 23:58:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:41.955 23:58:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.955 23:58:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:41.955 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:41.955 23:58:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.955 23:58:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:41.955 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:41.955 23:58:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.955 23:58:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.955 23:58:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.955 23:58:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:41.955 23:58:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:41.955 23:58:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:41.955 23:58:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.955 23:58:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.955 23:58:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:41.955 23:58:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.955 23:58:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:13:41.955 Found net devices under 0000:31:00.0: cvl_0_0 00:13:41.955 23:58:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.955 23:58:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.955 23:58:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.955 23:58:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:41.955 23:58:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.956 23:58:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:41.956 Found net devices under 0000:31:00.1: cvl_0_1 00:13:41.956 23:58:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.956 23:58:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:41.956 23:58:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:41.956 23:58:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:41.956 23:58:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:41.956 23:58:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:41.956 23:58:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.956 23:58:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.956 23:58:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.956 23:58:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:41.956 23:58:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.956 23:58:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.956 23:58:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:41.956 23:58:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.956 23:58:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.956 23:58:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:41.956 23:58:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:41.956 23:58:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.956 23:58:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.956 23:58:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.956 23:58:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.956 23:58:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:41.956 23:58:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.956 23:58:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.956 23:58:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.956 23:58:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:41.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:13:41.956 00:13:41.956 --- 10.0.0.2 ping statistics --- 00:13:41.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.956 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:13:41.956 23:58:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:41.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:13:41.956 00:13:41.956 --- 10.0.0.1 ping statistics --- 00:13:41.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.956 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:13:41.956 23:58:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.956 23:58:11 -- nvmf/common.sh@411 -- # return 0 00:13:41.956 23:58:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:41.956 23:58:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.956 23:58:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:41.956 23:58:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:41.956 23:58:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.956 23:58:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:41.956 23:58:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:41.956 23:58:11 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:41.956 23:58:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:41.956 23:58:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.956 23:58:11 -- common/autotest_common.sh@10 -- # set +x 00:13:41.956 ************************************ 00:13:41.956 START TEST nvmf_host_management 00:13:41.956 ************************************ 00:13:41.956 23:58:11 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:41.956 23:58:11 -- target/host_management.sh@69 -- # starttarget 00:13:41.956 23:58:11 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:41.956 23:58:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:41.956 23:58:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:41.956 23:58:11 -- common/autotest_common.sh@10 -- # set +x 00:13:41.956 23:58:11 -- nvmf/common.sh@470 -- # nvmfpid=329697 00:13:41.956 23:58:11 -- nvmf/common.sh@471 -- # waitforlisten 329697 00:13:41.956 23:58:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:41.956 23:58:11 -- common/autotest_common.sh@817 -- # '[' -z 329697 ']' 00:13:41.956 23:58:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.956 23:58:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:41.956 23:58:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.956 23:58:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:41.956 23:58:11 -- common/autotest_common.sh@10 -- # set +x 00:13:41.956 [2024-04-26 23:58:11.574450] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:13:41.956 [2024-04-26 23:58:11.574519] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.956 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.956 [2024-04-26 23:58:11.648311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.956 [2024-04-26 23:58:11.713293] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
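nvmf_tcp_init above splits the two e810 ports so that target traffic terminates inside a private network namespace while the initiator stays in the root namespace. Condensed from the trace above, with the same interface names and addresses it used (the addr-flush steps are left out):

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                         # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0 # target side, listens on 10.0.0.2:4420 later
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # same sanity check as the log above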
00:13:41.956 [2024-04-26 23:58:11.713329] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.956 [2024-04-26 23:58:11.713337] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.956 [2024-04-26 23:58:11.713343] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.956 [2024-04-26 23:58:11.713349] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.956 [2024-04-26 23:58:11.713453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.956 [2024-04-26 23:58:11.713609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.956 [2024-04-26 23:58:11.713763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.956 [2024-04-26 23:58:11.713764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:42.217 23:58:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:42.217 23:58:12 -- common/autotest_common.sh@850 -- # return 0 00:13:42.217 23:58:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:42.217 23:58:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:42.217 23:58:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.217 23:58:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.217 23:58:12 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.217 23:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.217 23:58:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.217 [2024-04-26 23:58:12.383437] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.217 23:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.217 23:58:12 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:42.217 23:58:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:42.217 23:58:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.217 23:58:12 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:42.217 23:58:12 -- target/host_management.sh@23 -- # cat 00:13:42.217 23:58:12 -- target/host_management.sh@30 -- # rpc_cmd 00:13:42.217 23:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.217 23:58:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.217 Malloc0 00:13:42.476 [2024-04-26 23:58:12.442863] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.476 23:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.476 23:58:12 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:42.476 23:58:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:42.476 23:58:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.476 23:58:12 -- target/host_management.sh@73 -- # perfpid=329855 00:13:42.476 23:58:12 -- target/host_management.sh@74 -- # waitforlisten 329855 /var/tmp/bdevperf.sock 00:13:42.476 23:58:12 -- common/autotest_common.sh@817 -- # '[' -z 329855 ']' 00:13:42.476 23:58:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.476 23:58:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:42.476 23:58:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:13:42.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.476 23:58:12 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:42.476 23:58:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:42.476 23:58:12 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:42.476 23:58:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.476 23:58:12 -- nvmf/common.sh@521 -- # config=() 00:13:42.476 23:58:12 -- nvmf/common.sh@521 -- # local subsystem config 00:13:42.476 23:58:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:42.476 23:58:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:42.476 { 00:13:42.476 "params": { 00:13:42.476 "name": "Nvme$subsystem", 00:13:42.476 "trtype": "$TEST_TRANSPORT", 00:13:42.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.476 "adrfam": "ipv4", 00:13:42.476 "trsvcid": "$NVMF_PORT", 00:13:42.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.477 "hdgst": ${hdgst:-false}, 00:13:42.477 "ddgst": ${ddgst:-false} 00:13:42.477 }, 00:13:42.477 "method": "bdev_nvme_attach_controller" 00:13:42.477 } 00:13:42.477 EOF 00:13:42.477 )") 00:13:42.477 23:58:12 -- nvmf/common.sh@543 -- # cat 00:13:42.477 23:58:12 -- nvmf/common.sh@545 -- # jq . 00:13:42.477 23:58:12 -- nvmf/common.sh@546 -- # IFS=, 00:13:42.477 23:58:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:42.477 "params": { 00:13:42.477 "name": "Nvme0", 00:13:42.477 "trtype": "tcp", 00:13:42.477 "traddr": "10.0.0.2", 00:13:42.477 "adrfam": "ipv4", 00:13:42.477 "trsvcid": "4420", 00:13:42.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:42.477 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:42.477 "hdgst": false, 00:13:42.477 "ddgst": false 00:13:42.477 }, 00:13:42.477 "method": "bdev_nvme_attach_controller" 00:13:42.477 }' 00:13:42.477 [2024-04-26 23:58:12.548407] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:13:42.477 [2024-04-26 23:58:12.548473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329855 ] 00:13:42.477 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.477 [2024-04-26 23:58:12.609102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.477 [2024-04-26 23:58:12.673254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.046 Running I/O for 10 seconds... 
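gen_nvmf_target_json above hands bdevperf a JSON config whose single method is bdev_nvme_attach_controller. For reference, the same attach issued by hand against an already running app would look roughly like this; the short flag names are the usual rpc.py ones and are an assumption here, since the log only shows the JSON form.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0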
00:13:43.313 23:58:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:43.313 23:58:13 -- common/autotest_common.sh@850 -- # return 0 00:13:43.313 23:58:13 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:43.313 23:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.313 23:58:13 -- common/autotest_common.sh@10 -- # set +x 00:13:43.313 23:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.313 23:58:13 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:43.313 23:58:13 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:43.313 23:58:13 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:43.313 23:58:13 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:43.313 23:58:13 -- target/host_management.sh@52 -- # local ret=1 00:13:43.313 23:58:13 -- target/host_management.sh@53 -- # local i 00:13:43.313 23:58:13 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:43.313 23:58:13 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:43.313 23:58:13 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:43.313 23:58:13 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:43.313 23:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.313 23:58:13 -- common/autotest_common.sh@10 -- # set +x 00:13:43.313 23:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.313 23:58:13 -- target/host_management.sh@55 -- # read_io_count=515 00:13:43.313 23:58:13 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:13:43.313 23:58:13 -- target/host_management.sh@59 -- # ret=0 00:13:43.313 23:58:13 -- target/host_management.sh@60 -- # break 00:13:43.313 23:58:13 -- target/host_management.sh@64 -- # return 0 00:13:43.313 23:58:13 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:43.313 23:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.313 23:58:13 -- common/autotest_common.sh@10 -- # set +x 00:13:43.313 [2024-04-26 23:58:13.405953] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406025] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406033] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406040] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406047] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406054] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406060] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406066] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the 
state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406073] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406079] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406085] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406092] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406098] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406104] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406110] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406117] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406123] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406135] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406141] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406147] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406154] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406161] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406167] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406174] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406180] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406186] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406193] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406199] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406205] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406212] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406218] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406224] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406231] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406237] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406244] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406250] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406256] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406263] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406269] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406276] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406282] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406288] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406295] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406302] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406313] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406320] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406326] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406332] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406339] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406345] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406351] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 
23:58:13.406358] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406365] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.313 [2024-04-26 23:58:13.406371] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406378] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406384] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406391] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406397] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406403] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406410] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406417] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406424] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406430] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d40 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.314 [2024-04-26 23:58:13.406651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.406663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.314 [2024-04-26 23:58:13.406671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.406679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.314 [2024-04-26 23:58:13.406687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.406696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:43.314 [2024-04-26 23:58:13.406708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.406716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e75630 is same with the state(5) to be set 00:13:43.314 [2024-04-26 23:58:13.406988] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.314 [2024-04-26 23:58:13.407376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.314 [2024-04-26 23:58:13.407386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.315 [2024-04-26 23:58:13.407952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.315 [2024-04-26 23:58:13.407963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.407971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.407981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.407989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.407999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.408007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.408017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.408025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.408035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.408043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.408053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.408061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.408071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.408079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.408089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.408097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.408107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.408116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.408128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.408136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.408146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.316 [2024-04-26 23:58:13.408154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.408163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6950 is same with the state(5) to be set 00:13:43.316 [2024-04-26 23:58:13.408205] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22a6950 was disconnected and freed. reset controller. 00:13:43.316 [2024-04-26 23:58:13.409419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:43.316 23:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.316 task offset: 73728 on job bdev=Nvme0n1 fails 00:13:43.316 00:13:43.316 Latency(us) 00:13:43.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.316 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:43.316 Job: Nvme0n1 ended in about 0.43 seconds with error 00:13:43.316 Verification LBA range: start 0x0 length 0x400 00:13:43.316 Nvme0n1 : 0.43 1345.56 84.10 149.51 0.00 41552.28 6225.92 34734.08 00:13:43.316 =================================================================================================================== 00:13:43.316 Total : 1345.56 84.10 149.51 0.00 41552.28 6225.92 34734.08 00:13:43.316 23:58:13 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:43.316 [2024-04-26 23:58:13.411404] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:43.316 [2024-04-26 23:58:13.411427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e75630 (9): Bad file descriptor 00:13:43.316 23:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.316 23:58:13 -- common/autotest_common.sh@10 -- # set +x 00:13:43.316 [2024-04-26 23:58:13.415633] ctrlr.c: 780:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:43.316 [2024-04-26 23:58:13.415723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:43.316 [2024-04-26 23:58:13.415752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.316 [2024-04-26 23:58:13.415767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:43.316 [2024-04-26 23:58:13.415775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:43.316 [2024-04-26 23:58:13.415782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:43.316 [2024-04-26 23:58:13.415789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75630 00:13:43.316 [2024-04-26 23:58:13.415810] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e75630 (9): Bad file descriptor 00:13:43.316 [2024-04-26 23:58:13.415823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:43.316 [2024-04-26 23:58:13.415831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:43.316 [2024-04-26 23:58:13.415847] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:43.316 [2024-04-26 23:58:13.415866] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:13:43.316 23:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.316 23:58:13 -- target/host_management.sh@87 -- # sleep 1 00:13:44.315 23:58:14 -- target/host_management.sh@91 -- # kill -9 329855 00:13:44.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (329855) - No such process 00:13:44.315 23:58:14 -- target/host_management.sh@91 -- # true 00:13:44.315 23:58:14 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:44.315 23:58:14 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:44.315 23:58:14 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:44.315 23:58:14 -- nvmf/common.sh@521 -- # config=() 00:13:44.315 23:58:14 -- nvmf/common.sh@521 -- # local subsystem config 00:13:44.315 23:58:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:44.315 23:58:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:44.315 { 00:13:44.315 "params": { 00:13:44.315 "name": "Nvme$subsystem", 00:13:44.315 "trtype": "$TEST_TRANSPORT", 00:13:44.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:44.315 "adrfam": "ipv4", 00:13:44.315 "trsvcid": "$NVMF_PORT", 00:13:44.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:44.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:44.315 "hdgst": ${hdgst:-false}, 00:13:44.315 "ddgst": ${ddgst:-false} 00:13:44.315 }, 00:13:44.315 "method": "bdev_nvme_attach_controller" 00:13:44.315 } 00:13:44.315 EOF 00:13:44.315 )") 00:13:44.315 23:58:14 -- nvmf/common.sh@543 -- # cat 00:13:44.315 23:58:14 -- nvmf/common.sh@545 -- # jq . 
00:13:44.315 23:58:14 -- nvmf/common.sh@546 -- # IFS=, 00:13:44.315 23:58:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:44.315 "params": { 00:13:44.315 "name": "Nvme0", 00:13:44.315 "trtype": "tcp", 00:13:44.315 "traddr": "10.0.0.2", 00:13:44.315 "adrfam": "ipv4", 00:13:44.315 "trsvcid": "4420", 00:13:44.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:44.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:44.315 "hdgst": false, 00:13:44.315 "ddgst": false 00:13:44.315 }, 00:13:44.315 "method": "bdev_nvme_attach_controller" 00:13:44.315 }' 00:13:44.315 [2024-04-26 23:58:14.480015] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:13:44.315 [2024-04-26 23:58:14.480074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330314 ] 00:13:44.315 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.576 [2024-04-26 23:58:14.538722] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.576 [2024-04-26 23:58:14.603051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.836 Running I/O for 1 seconds... 00:13:45.779 00:13:45.779 Latency(us) 00:13:45.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.779 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:45.779 Verification LBA range: start 0x0 length 0x400 00:13:45.779 Nvme0n1 : 1.03 1495.12 93.45 0.00 0.00 42088.30 9120.43 34297.17 00:13:45.779 =================================================================================================================== 00:13:45.779 Total : 1495.12 93.45 0.00 0.00 42088.30 9120.43 34297.17 00:13:46.039 23:58:16 -- target/host_management.sh@102 -- # stoptarget 00:13:46.039 23:58:16 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:46.039 23:58:16 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:46.039 23:58:16 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:46.039 23:58:16 -- target/host_management.sh@40 -- # nvmftestfini 00:13:46.039 23:58:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:46.039 23:58:16 -- nvmf/common.sh@117 -- # sync 00:13:46.039 23:58:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.039 23:58:16 -- nvmf/common.sh@120 -- # set +e 00:13:46.039 23:58:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.039 23:58:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.039 rmmod nvme_tcp 00:13:46.039 rmmod nvme_fabrics 00:13:46.039 rmmod nvme_keyring 00:13:46.039 23:58:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.039 23:58:16 -- nvmf/common.sh@124 -- # set -e 00:13:46.039 23:58:16 -- nvmf/common.sh@125 -- # return 0 00:13:46.039 23:58:16 -- nvmf/common.sh@478 -- # '[' -n 329697 ']' 00:13:46.039 23:58:16 -- nvmf/common.sh@479 -- # killprocess 329697 00:13:46.039 23:58:16 -- common/autotest_common.sh@936 -- # '[' -z 329697 ']' 00:13:46.039 23:58:16 -- common/autotest_common.sh@940 -- # kill -0 329697 00:13:46.039 23:58:16 -- common/autotest_common.sh@941 -- # uname 00:13:46.039 23:58:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:46.039 23:58:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 329697 00:13:46.039 23:58:16 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:46.039 23:58:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:46.039 23:58:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 329697' 00:13:46.039 killing process with pid 329697 00:13:46.039 23:58:16 -- common/autotest_common.sh@955 -- # kill 329697 00:13:46.039 23:58:16 -- common/autotest_common.sh@960 -- # wait 329697 00:13:46.299 [2024-04-26 23:58:16.294051] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:46.299 23:58:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:46.299 23:58:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:46.299 23:58:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:46.299 23:58:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.299 23:58:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.299 23:58:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.299 23:58:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.299 23:58:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.213 23:58:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:48.213 00:13:48.213 real 0m6.878s 00:13:48.213 user 0m20.973s 00:13:48.213 sys 0m0.985s 00:13:48.213 23:58:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:48.213 23:58:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.213 ************************************ 00:13:48.213 END TEST nvmf_host_management 00:13:48.213 ************************************ 00:13:48.213 23:58:18 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:48.213 00:13:48.213 real 0m14.238s 00:13:48.213 user 0m23.003s 00:13:48.213 sys 0m6.226s 00:13:48.213 23:58:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:48.213 23:58:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.213 ************************************ 00:13:48.213 END TEST nvmf_host_management 00:13:48.213 ************************************ 00:13:48.475 23:58:18 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:48.475 23:58:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:48.475 23:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.475 23:58:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.475 ************************************ 00:13:48.475 START TEST nvmf_lvol 00:13:48.475 ************************************ 00:13:48.475 23:58:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:48.735 * Looking for test storage... 
00:13:48.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.735 23:58:18 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.735 23:58:18 -- nvmf/common.sh@7 -- # uname -s 00:13:48.735 23:58:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.735 23:58:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.735 23:58:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.735 23:58:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.735 23:58:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.735 23:58:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.735 23:58:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.735 23:58:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.735 23:58:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.735 23:58:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.735 23:58:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:48.735 23:58:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:48.735 23:58:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.735 23:58:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.735 23:58:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.735 23:58:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.735 23:58:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.735 23:58:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.735 23:58:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.735 23:58:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.735 23:58:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.735 23:58:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.735 23:58:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.735 23:58:18 -- paths/export.sh@5 -- # export PATH 00:13:48.735 23:58:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.735 23:58:18 -- nvmf/common.sh@47 -- # : 0 00:13:48.735 23:58:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.735 23:58:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.735 23:58:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.735 23:58:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.735 23:58:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.735 23:58:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.735 23:58:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.735 23:58:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.735 23:58:18 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.735 23:58:18 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.735 23:58:18 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:48.735 23:58:18 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:48.735 23:58:18 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.735 23:58:18 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:48.735 23:58:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:48.735 23:58:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.735 23:58:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:48.735 23:58:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:48.735 23:58:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:48.735 23:58:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.735 23:58:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.735 23:58:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.735 23:58:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:48.735 23:58:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:48.735 23:58:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.735 23:58:18 -- common/autotest_common.sh@10 -- # set +x 00:13:55.323 23:58:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:55.323 23:58:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.323 23:58:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.323 23:58:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.323 23:58:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.323 23:58:25 
-- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.323 23:58:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.323 23:58:25 -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.323 23:58:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.323 23:58:25 -- nvmf/common.sh@296 -- # e810=() 00:13:55.323 23:58:25 -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.323 23:58:25 -- nvmf/common.sh@297 -- # x722=() 00:13:55.323 23:58:25 -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.323 23:58:25 -- nvmf/common.sh@298 -- # mlx=() 00:13:55.323 23:58:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.323 23:58:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.323 23:58:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.323 23:58:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.323 23:58:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.323 23:58:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.323 23:58:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:55.323 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:55.323 23:58:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.323 23:58:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:55.323 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:55.323 23:58:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.323 23:58:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.323 23:58:25 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.323 23:58:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:55.323 23:58:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.323 23:58:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:55.323 Found net devices under 0000:31:00.0: cvl_0_0 00:13:55.323 23:58:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.323 23:58:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.323 23:58:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.323 23:58:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:55.323 23:58:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.323 23:58:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:55.323 Found net devices under 0000:31:00.1: cvl_0_1 00:13:55.323 23:58:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.323 23:58:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:55.323 23:58:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:55.323 23:58:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:55.323 23:58:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:55.323 23:58:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.323 23:58:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.323 23:58:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.323 23:58:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:55.323 23:58:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.323 23:58:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.323 23:58:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:55.323 23:58:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.323 23:58:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.323 23:58:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:55.584 23:58:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:55.584 23:58:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.584 23:58:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.584 23:58:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.584 23:58:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.584 23:58:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:55.584 23:58:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.845 23:58:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.845 23:58:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.845 23:58:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:55.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:13:55.845 00:13:55.845 --- 10.0.0.2 ping statistics --- 00:13:55.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.845 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:13:55.845 23:58:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:13:55.845 00:13:55.845 --- 10.0.0.1 ping statistics --- 00:13:55.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.845 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:55.845 23:58:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.845 23:58:25 -- nvmf/common.sh@411 -- # return 0 00:13:55.845 23:58:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:55.845 23:58:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.845 23:58:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:55.845 23:58:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:55.845 23:58:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.845 23:58:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:55.845 23:58:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:55.845 23:58:25 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:55.845 23:58:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:55.845 23:58:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:55.845 23:58:25 -- common/autotest_common.sh@10 -- # set +x 00:13:55.845 23:58:25 -- nvmf/common.sh@470 -- # nvmfpid=334854 00:13:55.845 23:58:25 -- nvmf/common.sh@471 -- # waitforlisten 334854 00:13:55.845 23:58:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:55.845 23:58:25 -- common/autotest_common.sh@817 -- # '[' -z 334854 ']' 00:13:55.845 23:58:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.845 23:58:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:55.845 23:58:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.845 23:58:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:55.845 23:58:25 -- common/autotest_common.sh@10 -- # set +x 00:13:55.845 [2024-04-26 23:58:25.937480] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:13:55.845 [2024-04-26 23:58:25.937528] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.845 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.845 [2024-04-26 23:58:26.003246] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:56.107 [2024-04-26 23:58:26.066492] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.107 [2024-04-26 23:58:26.066530] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.107 [2024-04-26 23:58:26.066538] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.107 [2024-04-26 23:58:26.066544] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.107 [2024-04-26 23:58:26.066550] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
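Condensed from the nvmf_tcp_init trace above (a recap of the commands already shown, not new steps; interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones used in this run), the loopback topology is wired up roughly as:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port gets its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator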
00:13:56.107 [2024-04-26 23:58:26.066655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.107 [2024-04-26 23:58:26.066769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.107 [2024-04-26 23:58:26.066773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.679 23:58:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:56.679 23:58:26 -- common/autotest_common.sh@850 -- # return 0 00:13:56.679 23:58:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:56.680 23:58:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:56.680 23:58:26 -- common/autotest_common.sh@10 -- # set +x 00:13:56.680 23:58:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.680 23:58:26 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.680 [2024-04-26 23:58:26.882871] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.941 23:58:26 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.941 23:58:27 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:56.941 23:58:27 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.201 23:58:27 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:57.201 23:58:27 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:57.462 23:58:27 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:57.462 23:58:27 -- target/nvmf_lvol.sh@29 -- # lvs=d655d2ed-18f8-416c-afd0-14c14a64efa2 00:13:57.462 23:58:27 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d655d2ed-18f8-416c-afd0-14c14a64efa2 lvol 20 00:13:57.722 23:58:27 -- target/nvmf_lvol.sh@32 -- # lvol=7e0a4e37-052b-482a-99df-56b85ecea2f3 00:13:57.722 23:58:27 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:57.984 23:58:27 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e0a4e37-052b-482a-99df-56b85ecea2f3 00:13:57.984 23:58:28 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:58.245 [2024-04-26 23:58:28.264464] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.245 23:58:28 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.245 23:58:28 -- target/nvmf_lvol.sh@42 -- # perf_pid=335471 00:13:58.245 23:58:28 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:58.245 23:58:28 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:58.504 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.447 
23:58:29 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7e0a4e37-052b-482a-99df-56b85ecea2f3 MY_SNAPSHOT 00:13:59.447 23:58:29 -- target/nvmf_lvol.sh@47 -- # snapshot=0f19df41-c54c-4a6c-b893-2319edbe6f0e 00:13:59.447 23:58:29 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7e0a4e37-052b-482a-99df-56b85ecea2f3 30 00:13:59.707 23:58:29 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0f19df41-c54c-4a6c-b893-2319edbe6f0e MY_CLONE 00:13:59.966 23:58:30 -- target/nvmf_lvol.sh@49 -- # clone=ba2fe6bf-a3b7-478f-8667-9068cd3e101e 00:13:59.967 23:58:30 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ba2fe6bf-a3b7-478f-8667-9068cd3e101e 00:14:00.226 23:58:30 -- target/nvmf_lvol.sh@53 -- # wait 335471 00:14:10.217 Initializing NVMe Controllers 00:14:10.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:10.217 Controller IO queue size 128, less than required. 00:14:10.217 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:10.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:10.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:10.217 Initialization complete. Launching workers. 00:14:10.217 ======================================================== 00:14:10.217 Latency(us) 00:14:10.217 Device Information : IOPS MiB/s Average min max 00:14:10.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12567.90 49.09 10189.36 1574.02 43830.43 00:14:10.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12708.90 49.64 10072.58 3467.25 46612.29 00:14:10.217 ======================================================== 00:14:10.217 Total : 25276.80 98.74 10130.65 1574.02 46612.29 00:14:10.217 00:14:10.217 23:58:38 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:10.217 23:58:38 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e0a4e37-052b-482a-99df-56b85ecea2f3 00:14:10.217 23:58:39 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d655d2ed-18f8-416c-afd0-14c14a64efa2 00:14:10.217 23:58:39 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:10.217 23:58:39 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:10.217 23:58:39 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:10.217 23:58:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:10.217 23:58:39 -- nvmf/common.sh@117 -- # sync 00:14:10.217 23:58:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.217 23:58:39 -- nvmf/common.sh@120 -- # set +e 00:14:10.217 23:58:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.217 23:58:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.217 rmmod nvme_tcp 00:14:10.217 rmmod nvme_fabrics 00:14:10.217 rmmod nvme_keyring 00:14:10.217 23:58:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.217 23:58:39 -- nvmf/common.sh@124 -- # set -e 00:14:10.217 23:58:39 -- nvmf/common.sh@125 -- # return 0 00:14:10.217 23:58:39 -- nvmf/common.sh@478 -- # '[' -n 334854 ']' 
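Stripped of the xtrace noise, the nvmf_lvol test body traced above amounts to the following rpc.py sequence (a condensed recap: rpc.py stands for the full scripts/rpc.py path, and $lvs, $lvol, $snapshot, $clone stand for the UUIDs reported earlier in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                                  # Malloc0
    rpc.py bdev_malloc_create 64 512                                  # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs                         # -> $lvs
    rpc.py bdev_lvol_create -u $lvs lvol 20                           # -> $lvol
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &            # background I/O while the lvol is manipulated
    rpc.py bdev_lvol_snapshot $lvol MY_SNAPSHOT                       # -> $snapshot
    rpc.py bdev_lvol_resize $lvol 30
    rpc.py bdev_lvol_clone $snapshot MY_CLONE                         # -> $clone
    rpc.py bdev_lvol_inflate $clone
    # after perf completes: nvmf_delete_subsystem, bdev_lvol_delete, bdev_lvol_delete_lvstore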
00:14:10.217 23:58:39 -- nvmf/common.sh@479 -- # killprocess 334854 00:14:10.217 23:58:39 -- common/autotest_common.sh@936 -- # '[' -z 334854 ']' 00:14:10.217 23:58:39 -- common/autotest_common.sh@940 -- # kill -0 334854 00:14:10.217 23:58:39 -- common/autotest_common.sh@941 -- # uname 00:14:10.217 23:58:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.217 23:58:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 334854 00:14:10.217 23:58:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:10.217 23:58:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:10.217 23:58:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 334854' 00:14:10.217 killing process with pid 334854 00:14:10.218 23:58:39 -- common/autotest_common.sh@955 -- # kill 334854 00:14:10.218 23:58:39 -- common/autotest_common.sh@960 -- # wait 334854 00:14:10.218 23:58:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:10.218 23:58:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:10.218 23:58:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:10.218 23:58:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.218 23:58:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.218 23:58:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.218 23:58:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.218 23:58:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.603 23:58:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.603 00:14:11.603 real 0m23.049s 00:14:11.603 user 1m3.691s 00:14:11.603 sys 0m7.551s 00:14:11.603 23:58:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:11.603 23:58:41 -- common/autotest_common.sh@10 -- # set +x 00:14:11.603 ************************************ 00:14:11.603 END TEST nvmf_lvol 00:14:11.603 ************************************ 00:14:11.603 23:58:41 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:11.603 23:58:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:11.603 23:58:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.603 23:58:41 -- common/autotest_common.sh@10 -- # set +x 00:14:11.864 ************************************ 00:14:11.864 START TEST nvmf_lvs_grow 00:14:11.864 ************************************ 00:14:11.864 23:58:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:11.864 * Looking for test storage... 
00:14:11.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.864 23:58:41 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.864 23:58:41 -- nvmf/common.sh@7 -- # uname -s 00:14:11.864 23:58:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.864 23:58:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.864 23:58:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.864 23:58:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.864 23:58:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.864 23:58:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.864 23:58:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.864 23:58:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.864 23:58:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.864 23:58:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.864 23:58:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:11.864 23:58:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:11.864 23:58:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.864 23:58:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.864 23:58:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.864 23:58:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.864 23:58:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.864 23:58:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.864 23:58:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.864 23:58:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.864 23:58:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.864 23:58:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.865 23:58:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.865 23:58:42 -- paths/export.sh@5 -- # export PATH 00:14:11.865 23:58:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.865 23:58:42 -- nvmf/common.sh@47 -- # : 0 00:14:11.865 23:58:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.865 23:58:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.865 23:58:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.865 23:58:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.865 23:58:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.865 23:58:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.865 23:58:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.865 23:58:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.865 23:58:42 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.865 23:58:42 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.865 23:58:42 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:11.865 23:58:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:11.865 23:58:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.865 23:58:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:11.865 23:58:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:11.865 23:58:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:11.865 23:58:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.865 23:58:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.865 23:58:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.865 23:58:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:11.865 23:58:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:11.865 23:58:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.865 23:58:42 -- common/autotest_common.sh@10 -- # set +x 00:14:20.002 23:58:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:20.002 23:58:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.002 23:58:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.002 23:58:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.002 23:58:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.002 23:58:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.002 23:58:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.002 23:58:48 -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.002 23:58:48 
-- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.002 23:58:48 -- nvmf/common.sh@296 -- # e810=() 00:14:20.002 23:58:48 -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.002 23:58:48 -- nvmf/common.sh@297 -- # x722=() 00:14:20.002 23:58:48 -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.002 23:58:48 -- nvmf/common.sh@298 -- # mlx=() 00:14:20.002 23:58:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.002 23:58:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.002 23:58:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.002 23:58:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.002 23:58:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:20.002 23:58:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.003 23:58:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.003 23:58:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.003 23:58:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:20.003 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:20.003 23:58:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.003 23:58:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:20.003 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:20.003 23:58:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.003 23:58:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.003 23:58:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.003 23:58:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:20.003 23:58:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.003 23:58:48 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:20.003 Found net devices under 0000:31:00.0: cvl_0_0 00:14:20.003 23:58:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.003 23:58:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.003 23:58:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.003 23:58:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:20.003 23:58:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.003 23:58:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:20.003 Found net devices under 0000:31:00.1: cvl_0_1 00:14:20.003 23:58:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.003 23:58:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:20.003 23:58:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:20.003 23:58:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:20.003 23:58:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:20.003 23:58:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.003 23:58:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.003 23:58:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.003 23:58:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.003 23:58:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.003 23:58:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.003 23:58:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.003 23:58:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.003 23:58:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.003 23:58:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.003 23:58:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.003 23:58:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.003 23:58:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.003 23:58:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.003 23:58:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.003 23:58:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:20.003 23:58:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.003 23:58:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.003 23:58:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.003 23:58:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.784 ms 00:14:20.003 00:14:20.003 --- 10.0.0.2 ping statistics --- 00:14:20.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.003 rtt min/avg/max/mdev = 0.784/0.784/0.784/0.000 ms 00:14:20.003 23:58:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:20.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:14:20.003 00:14:20.003 --- 10.0.0.1 ping statistics --- 00:14:20.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.003 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:14:20.003 23:58:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.003 23:58:49 -- nvmf/common.sh@411 -- # return 0 00:14:20.003 23:58:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:20.003 23:58:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.003 23:58:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:20.003 23:58:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:20.003 23:58:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.003 23:58:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:20.003 23:58:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:20.003 23:58:49 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:20.003 23:58:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:20.003 23:58:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:20.003 23:58:49 -- common/autotest_common.sh@10 -- # set +x 00:14:20.003 23:58:49 -- nvmf/common.sh@470 -- # nvmfpid=341853 00:14:20.003 23:58:49 -- nvmf/common.sh@471 -- # waitforlisten 341853 00:14:20.003 23:58:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:20.003 23:58:49 -- common/autotest_common.sh@817 -- # '[' -z 341853 ']' 00:14:20.003 23:58:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.003 23:58:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:20.003 23:58:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.003 23:58:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:20.003 23:58:49 -- common/autotest_common.sh@10 -- # set +x 00:14:20.003 [2024-04-26 23:58:49.196809] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:14:20.003 [2024-04-26 23:58:49.196885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.003 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.003 [2024-04-26 23:58:49.267424] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.003 [2024-04-26 23:58:49.340981] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.003 [2024-04-26 23:58:49.341020] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.003 [2024-04-26 23:58:49.341031] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.003 [2024-04-26 23:58:49.341038] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.003 [2024-04-26 23:58:49.341044] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:20.003 [2024-04-26 23:58:49.341063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.003 23:58:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.003 23:58:49 -- common/autotest_common.sh@850 -- # return 0 00:14:20.003 23:58:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:20.003 23:58:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:20.003 23:58:49 -- common/autotest_common.sh@10 -- # set +x 00:14:20.003 23:58:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.003 23:58:50 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:20.003 [2024-04-26 23:58:50.144009] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.003 23:58:50 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:20.003 23:58:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:20.003 23:58:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:20.003 23:58:50 -- common/autotest_common.sh@10 -- # set +x 00:14:20.263 ************************************ 00:14:20.263 START TEST lvs_grow_clean 00:14:20.263 ************************************ 00:14:20.263 23:58:50 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:20.263 23:58:50 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:20.263 23:58:50 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:20.263 23:58:50 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:20.263 23:58:50 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:20.263 23:58:50 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:20.263 23:58:50 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:20.263 23:58:50 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:20.263 23:58:50 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:20.263 23:58:50 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:20.523 23:58:50 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:20.523 23:58:50 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:20.523 23:58:50 -- target/nvmf_lvs_grow.sh@28 -- # lvs=2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:20.523 23:58:50 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:20.523 23:58:50 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:20.782 23:58:50 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:20.782 23:58:50 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:20.782 23:58:50 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 lvol 150 00:14:20.782 23:58:50 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a 00:14:20.782 23:58:50 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:21.042 23:58:51 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:21.042 [2024-04-26 23:58:51.140840] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:21.042 [2024-04-26 23:58:51.140891] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:21.042 true 00:14:21.042 23:58:51 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:21.042 23:58:51 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:21.302 23:58:51 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:21.302 23:58:51 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:21.302 23:58:51 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a 00:14:21.562 23:58:51 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:21.562 [2024-04-26 23:58:51.750706] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.562 23:58:51 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.822 23:58:51 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=342358 00:14:21.822 23:58:51 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:21.822 23:58:51 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:21.822 23:58:51 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 342358 /var/tmp/bdevperf.sock 00:14:21.822 23:58:51 -- common/autotest_common.sh@817 -- # '[' -z 342358 ']' 00:14:21.822 23:58:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.822 23:58:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:21.822 23:58:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:21.822 23:58:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:21.822 23:58:51 -- common/autotest_common.sh@10 -- # set +x 00:14:21.822 [2024-04-26 23:58:51.965300] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
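In plainer terms, the lvs_grow_clean setup just traced is the following (again a recap of the trace, with rpc.py abbreviating scripts/rpc.py and $lvs/$lvol standing for the UUIDs reported above; the aio_bdev file lives under test/nvmf/target):

    truncate -s 200M .../test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
                                                                      # -> $lvs, 49 data clusters
    rpc.py bdev_lvol_create -u $lvs lvol 150                          # -> $lvol
    truncate -s 400M .../test/nvmf/target/aio_bdev                    # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                                   # AIO bdev: 51200 -> 102400 blocks; lvstore still 49 clusters
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # later, while bdevperf drives random writes over NVMe/TCP, the store itself is grown:
    rpc.py bdev_lvol_grow_lvstore -u $lvs                             # total_data_clusters 49 -> 99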
00:14:21.822 [2024-04-26 23:58:51.965349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342358 ] 00:14:21.822 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.822 [2024-04-26 23:58:52.023776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.083 [2024-04-26 23:58:52.088459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.656 23:58:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:22.656 23:58:52 -- common/autotest_common.sh@850 -- # return 0 00:14:22.656 23:58:52 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:22.917 Nvme0n1 00:14:22.917 23:58:53 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:23.178 [ 00:14:23.178 { 00:14:23.178 "name": "Nvme0n1", 00:14:23.178 "aliases": [ 00:14:23.178 "c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a" 00:14:23.178 ], 00:14:23.178 "product_name": "NVMe disk", 00:14:23.178 "block_size": 4096, 00:14:23.178 "num_blocks": 38912, 00:14:23.178 "uuid": "c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a", 00:14:23.178 "assigned_rate_limits": { 00:14:23.178 "rw_ios_per_sec": 0, 00:14:23.178 "rw_mbytes_per_sec": 0, 00:14:23.178 "r_mbytes_per_sec": 0, 00:14:23.178 "w_mbytes_per_sec": 0 00:14:23.178 }, 00:14:23.178 "claimed": false, 00:14:23.178 "zoned": false, 00:14:23.178 "supported_io_types": { 00:14:23.178 "read": true, 00:14:23.178 "write": true, 00:14:23.178 "unmap": true, 00:14:23.178 "write_zeroes": true, 00:14:23.178 "flush": true, 00:14:23.178 "reset": true, 00:14:23.178 "compare": true, 00:14:23.178 "compare_and_write": true, 00:14:23.178 "abort": true, 00:14:23.178 "nvme_admin": true, 00:14:23.178 "nvme_io": true 00:14:23.178 }, 00:14:23.178 "memory_domains": [ 00:14:23.178 { 00:14:23.178 "dma_device_id": "system", 00:14:23.178 "dma_device_type": 1 00:14:23.178 } 00:14:23.178 ], 00:14:23.178 "driver_specific": { 00:14:23.178 "nvme": [ 00:14:23.178 { 00:14:23.178 "trid": { 00:14:23.178 "trtype": "TCP", 00:14:23.178 "adrfam": "IPv4", 00:14:23.178 "traddr": "10.0.0.2", 00:14:23.178 "trsvcid": "4420", 00:14:23.178 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:23.178 }, 00:14:23.178 "ctrlr_data": { 00:14:23.178 "cntlid": 1, 00:14:23.178 "vendor_id": "0x8086", 00:14:23.178 "model_number": "SPDK bdev Controller", 00:14:23.178 "serial_number": "SPDK0", 00:14:23.178 "firmware_revision": "24.05", 00:14:23.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:23.178 "oacs": { 00:14:23.178 "security": 0, 00:14:23.178 "format": 0, 00:14:23.178 "firmware": 0, 00:14:23.178 "ns_manage": 0 00:14:23.178 }, 00:14:23.178 "multi_ctrlr": true, 00:14:23.178 "ana_reporting": false 00:14:23.178 }, 00:14:23.178 "vs": { 00:14:23.178 "nvme_version": "1.3" 00:14:23.178 }, 00:14:23.178 "ns_data": { 00:14:23.178 "id": 1, 00:14:23.178 "can_share": true 00:14:23.178 } 00:14:23.178 } 00:14:23.178 ], 00:14:23.178 "mp_policy": "active_passive" 00:14:23.178 } 00:14:23.178 } 00:14:23.178 ] 00:14:23.178 23:58:53 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=342693 00:14:23.178 23:58:53 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:23.178 23:58:53 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:23.178 Running I/O for 10 seconds... 00:14:24.119 Latency(us) 00:14:24.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.119 Nvme0n1 : 1.00 18606.00 72.68 0.00 0.00 0.00 0.00 0.00 00:14:24.119 =================================================================================================================== 00:14:24.119 Total : 18606.00 72.68 0.00 0.00 0.00 0.00 0.00 00:14:24.119 00:14:25.060 23:58:55 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:25.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.322 Nvme0n1 : 2.00 18763.00 73.29 0.00 0.00 0.00 0.00 0.00 00:14:25.322 =================================================================================================================== 00:14:25.322 Total : 18763.00 73.29 0.00 0.00 0.00 0.00 0.00 00:14:25.322 00:14:25.322 true 00:14:25.322 23:58:55 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:25.323 23:58:55 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:25.585 23:58:55 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:25.585 23:58:55 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:25.585 23:58:55 -- target/nvmf_lvs_grow.sh@65 -- # wait 342693 00:14:26.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.182 Nvme0n1 : 3.00 18814.00 73.49 0.00 0.00 0.00 0.00 0.00 00:14:26.182 =================================================================================================================== 00:14:26.182 Total : 18814.00 73.49 0.00 0.00 0.00 0.00 0.00 00:14:26.182 00:14:27.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.170 Nvme0n1 : 4.00 18842.75 73.60 0.00 0.00 0.00 0.00 0.00 00:14:27.170 =================================================================================================================== 00:14:27.170 Total : 18842.75 73.60 0.00 0.00 0.00 0.00 0.00 00:14:27.170 00:14:28.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.575 Nvme0n1 : 5.00 18873.00 73.72 0.00 0.00 0.00 0.00 0.00 00:14:28.575 =================================================================================================================== 00:14:28.575 Total : 18873.00 73.72 0.00 0.00 0.00 0.00 0.00 00:14:28.575 00:14:29.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.151 Nvme0n1 : 6.00 18891.83 73.80 0.00 0.00 0.00 0.00 0.00 00:14:29.151 =================================================================================================================== 00:14:29.151 Total : 18891.83 73.80 0.00 0.00 0.00 0.00 0.00 00:14:29.151 00:14:30.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.555 Nvme0n1 : 7.00 18905.43 73.85 0.00 0.00 0.00 0.00 0.00 00:14:30.555 =================================================================================================================== 00:14:30.555 Total : 18905.43 73.85 0.00 0.00 0.00 0.00 0.00 00:14:30.555 00:14:31.495 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:14:31.496 Nvme0n1 : 8.00 18916.50 73.89 0.00 0.00 0.00 0.00 0.00 00:14:31.496 =================================================================================================================== 00:14:31.496 Total : 18916.50 73.89 0.00 0.00 0.00 0.00 0.00 00:14:31.496 00:14:32.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.445 Nvme0n1 : 9.00 18925.89 73.93 0.00 0.00 0.00 0.00 0.00 00:14:32.445 =================================================================================================================== 00:14:32.445 Total : 18925.89 73.93 0.00 0.00 0.00 0.00 0.00 00:14:32.445 00:14:33.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.384 Nvme0n1 : 10.00 18933.60 73.96 0.00 0.00 0.00 0.00 0.00 00:14:33.384 =================================================================================================================== 00:14:33.384 Total : 18933.60 73.96 0.00 0.00 0.00 0.00 0.00 00:14:33.384 00:14:33.384 00:14:33.384 Latency(us) 00:14:33.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.384 Nvme0n1 : 10.01 18934.04 73.96 0.00 0.00 6755.76 4096.00 15291.73 00:14:33.384 =================================================================================================================== 00:14:33.384 Total : 18934.04 73.96 0.00 0.00 6755.76 4096.00 15291.73 00:14:33.384 0 00:14:33.384 23:59:03 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 342358 00:14:33.384 23:59:03 -- common/autotest_common.sh@936 -- # '[' -z 342358 ']' 00:14:33.384 23:59:03 -- common/autotest_common.sh@940 -- # kill -0 342358 00:14:33.384 23:59:03 -- common/autotest_common.sh@941 -- # uname 00:14:33.384 23:59:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:33.384 23:59:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 342358 00:14:33.384 23:59:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:33.384 23:59:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:33.384 23:59:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 342358' 00:14:33.384 killing process with pid 342358 00:14:33.384 23:59:03 -- common/autotest_common.sh@955 -- # kill 342358 00:14:33.384 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.384 00:14:33.384 Latency(us) 00:14:33.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.384 =================================================================================================================== 00:14:33.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.384 23:59:03 -- common/autotest_common.sh@960 -- # wait 342358 00:14:33.384 23:59:03 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:33.644 23:59:03 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:33.644 23:59:03 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:33.904 23:59:03 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:33.904 23:59:03 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:33.904 23:59:03 -- target/nvmf_lvs_grow.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:33.904 [2024-04-26 23:59:04.026311] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:33.904 23:59:04 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:33.904 23:59:04 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.904 23:59:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:33.904 23:59:04 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.904 23:59:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.904 23:59:04 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.904 23:59:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.904 23:59:04 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.904 23:59:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.904 23:59:04 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.904 23:59:04 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:33.904 23:59:04 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:34.165 request: 00:14:34.165 { 00:14:34.165 "uuid": "2bad6b83-17d6-405a-83b7-30d3ed925d94", 00:14:34.165 "method": "bdev_lvol_get_lvstores", 00:14:34.165 "req_id": 1 00:14:34.165 } 00:14:34.165 Got JSON-RPC error response 00:14:34.165 response: 00:14:34.165 { 00:14:34.165 "code": -19, 00:14:34.165 "message": "No such device" 00:14:34.165 } 00:14:34.165 23:59:04 -- common/autotest_common.sh@641 -- # es=1 00:14:34.165 23:59:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:34.165 23:59:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:34.165 23:59:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:34.165 23:59:04 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:34.165 aio_bdev 00:14:34.165 23:59:04 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a 00:14:34.165 23:59:04 -- common/autotest_common.sh@885 -- # local bdev_name=c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a 00:14:34.165 23:59:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:34.165 23:59:04 -- common/autotest_common.sh@887 -- # local i 00:14:34.165 23:59:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:34.165 23:59:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:34.165 23:59:04 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:34.425 23:59:04 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a -t 2000 
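The base-bdev removal check traced above reads, in shorthand (the NOT helper traced through autotest_common.sh expects the wrapped command to fail; rpc.py and $lvs/$lvol as before):

    rpc.py bdev_aio_delete aio_bdev                                   # base bdev removed, lvstore 'lvs' closes with it
    rpc.py bdev_lvol_get_lvstores -u $lvs                             # now fails: JSON-RPC error -19, "No such device"
    rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096   # put the base bdev back
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b $lvol -t 2000                            # lvol is rediscovered once aio_bdev is re-examined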
00:14:34.686 [ 00:14:34.686 { 00:14:34.686 "name": "c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a", 00:14:34.686 "aliases": [ 00:14:34.686 "lvs/lvol" 00:14:34.686 ], 00:14:34.686 "product_name": "Logical Volume", 00:14:34.686 "block_size": 4096, 00:14:34.686 "num_blocks": 38912, 00:14:34.686 "uuid": "c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a", 00:14:34.686 "assigned_rate_limits": { 00:14:34.686 "rw_ios_per_sec": 0, 00:14:34.686 "rw_mbytes_per_sec": 0, 00:14:34.686 "r_mbytes_per_sec": 0, 00:14:34.686 "w_mbytes_per_sec": 0 00:14:34.686 }, 00:14:34.686 "claimed": false, 00:14:34.686 "zoned": false, 00:14:34.686 "supported_io_types": { 00:14:34.686 "read": true, 00:14:34.686 "write": true, 00:14:34.686 "unmap": true, 00:14:34.686 "write_zeroes": true, 00:14:34.686 "flush": false, 00:14:34.686 "reset": true, 00:14:34.686 "compare": false, 00:14:34.686 "compare_and_write": false, 00:14:34.686 "abort": false, 00:14:34.686 "nvme_admin": false, 00:14:34.686 "nvme_io": false 00:14:34.686 }, 00:14:34.686 "driver_specific": { 00:14:34.686 "lvol": { 00:14:34.686 "lvol_store_uuid": "2bad6b83-17d6-405a-83b7-30d3ed925d94", 00:14:34.686 "base_bdev": "aio_bdev", 00:14:34.686 "thin_provision": false, 00:14:34.686 "snapshot": false, 00:14:34.686 "clone": false, 00:14:34.686 "esnap_clone": false 00:14:34.686 } 00:14:34.686 } 00:14:34.686 } 00:14:34.686 ] 00:14:34.686 23:59:04 -- common/autotest_common.sh@893 -- # return 0 00:14:34.686 23:59:04 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:34.686 23:59:04 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:34.686 23:59:04 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:34.686 23:59:04 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:34.686 23:59:04 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:34.946 23:59:04 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:34.946 23:59:04 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6692d7e-1c0d-414c-9fa3-9e2fcf6fce2a 00:14:34.946 23:59:05 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2bad6b83-17d6-405a-83b7-30d3ed925d94 00:14:35.207 23:59:05 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:35.495 00:14:35.495 real 0m15.194s 00:14:35.495 user 0m15.002s 00:14:35.495 sys 0m1.203s 00:14:35.495 23:59:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:35.495 23:59:05 -- common/autotest_common.sh@10 -- # set +x 00:14:35.495 ************************************ 00:14:35.495 END TEST lvs_grow_clean 00:14:35.495 ************************************ 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:35.495 23:59:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:35.495 23:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.495 23:59:05 -- common/autotest_common.sh@10 -- # set +x 00:14:35.495 ************************************ 00:14:35.495 START TEST lvs_grow_dirty 
00:14:35.495 ************************************ 00:14:35.495 23:59:05 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:35.495 23:59:05 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:35.755 23:59:05 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:35.755 23:59:05 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:36.016 23:59:06 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ab84554a-835e-40bd-90a2-189fc4309384 00:14:36.016 23:59:06 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:36.016 23:59:06 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:36.016 23:59:06 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:36.016 23:59:06 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:36.016 23:59:06 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ab84554a-835e-40bd-90a2-189fc4309384 lvol 150 00:14:36.359 23:59:06 -- target/nvmf_lvs_grow.sh@33 -- # lvol=cd67acbe-2c36-44e5-b6a8-2a64b5b88442 00:14:36.359 23:59:06 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:36.359 23:59:06 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:36.359 [2024-04-26 23:59:06.494358] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:36.359 [2024-04-26 23:59:06.494408] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:36.359 true 00:14:36.359 23:59:06 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:36.359 23:59:06 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:36.619 23:59:06 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:36.619 23:59:06 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:36.619 23:59:06 -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cd67acbe-2c36-44e5-b6a8-2a64b5b88442 00:14:36.880 23:59:06 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:36.880 23:59:07 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.140 23:59:07 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=345449 00:14:37.140 23:59:07 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.140 23:59:07 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:37.140 23:59:07 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 345449 /var/tmp/bdevperf.sock 00:14:37.140 23:59:07 -- common/autotest_common.sh@817 -- # '[' -z 345449 ']' 00:14:37.140 23:59:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.140 23:59:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:37.140 23:59:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.140 23:59:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:37.140 23:59:07 -- common/autotest_common.sh@10 -- # set +x 00:14:37.140 [2024-04-26 23:59:07.273429] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
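For readers following the trace: the lvs_grow_dirty setup above reduces to roughly the RPC sequence below. This is a condensed sketch, not part of the test script itself: paths are shortened to an illustrative /tmp/aio_file and a relative scripts/rpc.py, the TCP transport was already created earlier in the job, and the 200M/400M/150M sizes are simply the values this test happens to use.

    # Back an lvstore with a file-based AIO bdev, carve an lvol out of it, and export it over NVMe/TCP.
    truncate -s 200M /tmp/aio_file                                   # hypothetical stand-in for test/nvmf/target/aio_bdev
    scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # prints the lvstore UUID
    lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)       # 150 MiB logical volume, prints its UUID
    truncate -s 400M /tmp/aio_file                                   # grow the backing file...
    scripts/rpc.py bdev_aio_rescan aio_bdev                          # ...and let the AIO bdev pick up the new size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that the rescan only resizes the AIO bdev: the lvstore still reports 49 data clusters at this point and does not grow until bdev_lvol_grow_lvstore is issued during the bdevperf run further down.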
00:14:37.140 [2024-04-26 23:59:07.273478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345449 ] 00:14:37.140 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.140 [2024-04-26 23:59:07.331663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.400 [2024-04-26 23:59:07.395256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.971 23:59:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:37.971 23:59:08 -- common/autotest_common.sh@850 -- # return 0 00:14:37.971 23:59:08 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:38.232 Nvme0n1 00:14:38.232 23:59:08 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:38.493 [ 00:14:38.493 { 00:14:38.493 "name": "Nvme0n1", 00:14:38.493 "aliases": [ 00:14:38.493 "cd67acbe-2c36-44e5-b6a8-2a64b5b88442" 00:14:38.493 ], 00:14:38.493 "product_name": "NVMe disk", 00:14:38.493 "block_size": 4096, 00:14:38.493 "num_blocks": 38912, 00:14:38.493 "uuid": "cd67acbe-2c36-44e5-b6a8-2a64b5b88442", 00:14:38.493 "assigned_rate_limits": { 00:14:38.493 "rw_ios_per_sec": 0, 00:14:38.493 "rw_mbytes_per_sec": 0, 00:14:38.493 "r_mbytes_per_sec": 0, 00:14:38.493 "w_mbytes_per_sec": 0 00:14:38.493 }, 00:14:38.493 "claimed": false, 00:14:38.493 "zoned": false, 00:14:38.493 "supported_io_types": { 00:14:38.493 "read": true, 00:14:38.493 "write": true, 00:14:38.493 "unmap": true, 00:14:38.493 "write_zeroes": true, 00:14:38.493 "flush": true, 00:14:38.493 "reset": true, 00:14:38.493 "compare": true, 00:14:38.493 "compare_and_write": true, 00:14:38.493 "abort": true, 00:14:38.493 "nvme_admin": true, 00:14:38.493 "nvme_io": true 00:14:38.493 }, 00:14:38.493 "memory_domains": [ 00:14:38.493 { 00:14:38.493 "dma_device_id": "system", 00:14:38.493 "dma_device_type": 1 00:14:38.493 } 00:14:38.493 ], 00:14:38.493 "driver_specific": { 00:14:38.493 "nvme": [ 00:14:38.493 { 00:14:38.493 "trid": { 00:14:38.493 "trtype": "TCP", 00:14:38.493 "adrfam": "IPv4", 00:14:38.493 "traddr": "10.0.0.2", 00:14:38.493 "trsvcid": "4420", 00:14:38.493 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:38.493 }, 00:14:38.493 "ctrlr_data": { 00:14:38.493 "cntlid": 1, 00:14:38.493 "vendor_id": "0x8086", 00:14:38.493 "model_number": "SPDK bdev Controller", 00:14:38.493 "serial_number": "SPDK0", 00:14:38.493 "firmware_revision": "24.05", 00:14:38.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:38.493 "oacs": { 00:14:38.493 "security": 0, 00:14:38.493 "format": 0, 00:14:38.493 "firmware": 0, 00:14:38.493 "ns_manage": 0 00:14:38.493 }, 00:14:38.493 "multi_ctrlr": true, 00:14:38.493 "ana_reporting": false 00:14:38.493 }, 00:14:38.493 "vs": { 00:14:38.493 "nvme_version": "1.3" 00:14:38.493 }, 00:14:38.493 "ns_data": { 00:14:38.493 "id": 1, 00:14:38.493 "can_share": true 00:14:38.493 } 00:14:38.493 } 00:14:38.493 ], 00:14:38.493 "mp_policy": "active_passive" 00:14:38.493 } 00:14:38.493 } 00:14:38.493 ] 00:14:38.493 23:59:08 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=345782 00:14:38.493 23:59:08 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:38.493 23:59:08 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:38.493 Running I/O for 10 seconds... 00:14:39.876 Latency(us) 00:14:39.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.876 Nvme0n1 : 1.00 18632.00 72.78 0.00 0.00 0.00 0.00 0.00 00:14:39.876 =================================================================================================================== 00:14:39.876 Total : 18632.00 72.78 0.00 0.00 0.00 0.00 0.00 00:14:39.876 00:14:40.447 23:59:10 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:40.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.447 Nvme0n1 : 2.00 18721.50 73.13 0.00 0.00 0.00 0.00 0.00 00:14:40.447 =================================================================================================================== 00:14:40.447 Total : 18721.50 73.13 0.00 0.00 0.00 0.00 0.00 00:14:40.447 00:14:40.707 true 00:14:40.707 23:59:10 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:40.707 23:59:10 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:40.707 23:59:10 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:40.707 23:59:10 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:40.707 23:59:10 -- target/nvmf_lvs_grow.sh@65 -- # wait 345782 00:14:41.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.650 Nvme0n1 : 3.00 18771.33 73.33 0.00 0.00 0.00 0.00 0.00 00:14:41.650 =================================================================================================================== 00:14:41.650 Total : 18771.33 73.33 0.00 0.00 0.00 0.00 0.00 00:14:41.650 00:14:42.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.590 Nvme0n1 : 4.00 18812.25 73.49 0.00 0.00 0.00 0.00 0.00 00:14:42.590 =================================================================================================================== 00:14:42.590 Total : 18812.25 73.49 0.00 0.00 0.00 0.00 0.00 00:14:42.590 00:14:43.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.531 Nvme0n1 : 5.00 18837.80 73.59 0.00 0.00 0.00 0.00 0.00 00:14:43.531 =================================================================================================================== 00:14:43.531 Total : 18837.80 73.59 0.00 0.00 0.00 0.00 0.00 00:14:43.531 00:14:44.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.474 Nvme0n1 : 6.00 18854.33 73.65 0.00 0.00 0.00 0.00 0.00 00:14:44.474 =================================================================================================================== 00:14:44.474 Total : 18854.33 73.65 0.00 0.00 0.00 0.00 0.00 00:14:44.474 00:14:45.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.862 Nvme0n1 : 7.00 18874.29 73.73 0.00 0.00 0.00 0.00 0.00 00:14:45.862 =================================================================================================================== 00:14:45.862 Total : 18874.29 73.73 0.00 0.00 0.00 0.00 0.00 00:14:45.862 00:14:46.807 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:14:46.807 Nvme0n1 : 8.00 18889.75 73.79 0.00 0.00 0.00 0.00 0.00 00:14:46.807 =================================================================================================================== 00:14:46.807 Total : 18889.75 73.79 0.00 0.00 0.00 0.00 0.00 00:14:46.807 00:14:47.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.751 Nvme0n1 : 9.00 18901.89 73.84 0.00 0.00 0.00 0.00 0.00 00:14:47.751 =================================================================================================================== 00:14:47.751 Total : 18901.89 73.84 0.00 0.00 0.00 0.00 0.00 00:14:47.751 00:14:48.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.694 Nvme0n1 : 10.00 18912.10 73.88 0.00 0.00 0.00 0.00 0.00 00:14:48.694 =================================================================================================================== 00:14:48.694 Total : 18912.10 73.88 0.00 0.00 0.00 0.00 0.00 00:14:48.694 00:14:48.694 00:14:48.694 Latency(us) 00:14:48.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.694 Nvme0n1 : 10.01 18908.49 73.86 0.00 0.00 6764.39 3822.93 11905.71 00:14:48.694 =================================================================================================================== 00:14:48.694 Total : 18908.49 73.86 0.00 0.00 6764.39 3822.93 11905.71 00:14:48.694 0 00:14:48.694 23:59:18 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 345449 00:14:48.694 23:59:18 -- common/autotest_common.sh@936 -- # '[' -z 345449 ']' 00:14:48.694 23:59:18 -- common/autotest_common.sh@940 -- # kill -0 345449 00:14:48.694 23:59:18 -- common/autotest_common.sh@941 -- # uname 00:14:48.694 23:59:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.694 23:59:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 345449 00:14:48.694 23:59:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:48.694 23:59:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:48.694 23:59:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 345449' 00:14:48.694 killing process with pid 345449 00:14:48.694 23:59:18 -- common/autotest_common.sh@955 -- # kill 345449 00:14:48.694 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.694 00:14:48.694 Latency(us) 00:14:48.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.694 =================================================================================================================== 00:14:48.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.694 23:59:18 -- common/autotest_common.sh@960 -- # wait 345449 00:14:48.694 23:59:18 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:48.954 23:59:19 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:48.954 23:59:19 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:49.216 23:59:19 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:49.216 23:59:19 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:49.216 23:59:19 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 341853 00:14:49.216 23:59:19 -- 
target/nvmf_lvs_grow.sh@74 -- # wait 341853 00:14:49.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 341853 Killed "${NVMF_APP[@]}" "$@" 00:14:49.216 23:59:19 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:49.216 23:59:19 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:49.216 23:59:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:49.216 23:59:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:49.216 23:59:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.216 23:59:19 -- nvmf/common.sh@470 -- # nvmfpid=347807 00:14:49.216 23:59:19 -- nvmf/common.sh@471 -- # waitforlisten 347807 00:14:49.216 23:59:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:49.216 23:59:19 -- common/autotest_common.sh@817 -- # '[' -z 347807 ']' 00:14:49.216 23:59:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.216 23:59:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:49.216 23:59:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.216 23:59:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:49.216 23:59:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.216 [2024-04-26 23:59:19.313149] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:14:49.216 [2024-04-26 23:59:19.313204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.216 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.216 [2024-04-26 23:59:19.380308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.477 [2024-04-26 23:59:19.445598] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.477 [2024-04-26 23:59:19.445634] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.477 [2024-04-26 23:59:19.445642] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.477 [2024-04-26 23:59:19.445652] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.477 [2024-04-26 23:59:19.445658] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
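The dirty-recovery step can be read as the sketch below: SIGKILL the nvmf target so the lvstore metadata on the AIO file is never cleanly unloaded, start a fresh target, and simply re-register the same file; blobstore recovery (the bs_recover notices in the next trace entries) replays the metadata and the lvol reappears without any explicit import. Paths are again shortened, and $nvmfpid/$lvol stand in for the concrete PID and UUID shown above.

    kill -9 "$nvmfpid"                                           # 341853 in this run: the target that owned the lvstore
    wait "$nvmfpid"                                              # reaps the Killed process
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                   # fresh target (the CI job runs it inside cvl_0_0_ns_spdk)
    scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # re-registering triggers recovery on the dirty lvstore
    scripts/rpc.py bdev_wait_for_examine                         # lvol examine runs as part of bdev registration
    scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000             # the recovered lvol is visible again as lvs/lvol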
00:14:49.477 [2024-04-26 23:59:19.445685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.048 23:59:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:50.048 23:59:20 -- common/autotest_common.sh@850 -- # return 0 00:14:50.048 23:59:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:50.048 23:59:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:50.048 23:59:20 -- common/autotest_common.sh@10 -- # set +x 00:14:50.048 23:59:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.048 23:59:20 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:50.048 [2024-04-26 23:59:20.254625] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:50.048 [2024-04-26 23:59:20.254712] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:50.048 [2024-04-26 23:59:20.254740] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:50.309 23:59:20 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:50.309 23:59:20 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev cd67acbe-2c36-44e5-b6a8-2a64b5b88442 00:14:50.309 23:59:20 -- common/autotest_common.sh@885 -- # local bdev_name=cd67acbe-2c36-44e5-b6a8-2a64b5b88442 00:14:50.309 23:59:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:50.309 23:59:20 -- common/autotest_common.sh@887 -- # local i 00:14:50.309 23:59:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:50.309 23:59:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:50.309 23:59:20 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:50.309 23:59:20 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cd67acbe-2c36-44e5-b6a8-2a64b5b88442 -t 2000 00:14:50.569 [ 00:14:50.569 { 00:14:50.569 "name": "cd67acbe-2c36-44e5-b6a8-2a64b5b88442", 00:14:50.569 "aliases": [ 00:14:50.569 "lvs/lvol" 00:14:50.569 ], 00:14:50.569 "product_name": "Logical Volume", 00:14:50.569 "block_size": 4096, 00:14:50.569 "num_blocks": 38912, 00:14:50.569 "uuid": "cd67acbe-2c36-44e5-b6a8-2a64b5b88442", 00:14:50.569 "assigned_rate_limits": { 00:14:50.569 "rw_ios_per_sec": 0, 00:14:50.569 "rw_mbytes_per_sec": 0, 00:14:50.569 "r_mbytes_per_sec": 0, 00:14:50.569 "w_mbytes_per_sec": 0 00:14:50.569 }, 00:14:50.569 "claimed": false, 00:14:50.569 "zoned": false, 00:14:50.569 "supported_io_types": { 00:14:50.569 "read": true, 00:14:50.569 "write": true, 00:14:50.569 "unmap": true, 00:14:50.569 "write_zeroes": true, 00:14:50.569 "flush": false, 00:14:50.569 "reset": true, 00:14:50.569 "compare": false, 00:14:50.569 "compare_and_write": false, 00:14:50.569 "abort": false, 00:14:50.569 "nvme_admin": false, 00:14:50.569 "nvme_io": false 00:14:50.569 }, 00:14:50.569 "driver_specific": { 00:14:50.569 "lvol": { 00:14:50.569 "lvol_store_uuid": "ab84554a-835e-40bd-90a2-189fc4309384", 00:14:50.569 "base_bdev": "aio_bdev", 00:14:50.569 "thin_provision": false, 00:14:50.569 "snapshot": false, 00:14:50.569 "clone": false, 00:14:50.569 "esnap_clone": false 00:14:50.569 } 00:14:50.569 } 00:14:50.569 } 00:14:50.569 ] 00:14:50.569 23:59:20 -- common/autotest_common.sh@893 -- # return 0 00:14:50.569 23:59:20 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:50.569 23:59:20 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:50.569 23:59:20 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:50.569 23:59:20 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:50.569 23:59:20 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:50.830 23:59:20 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:50.830 23:59:20 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:50.830 [2024-04-26 23:59:21.030554] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:51.090 23:59:21 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:51.090 23:59:21 -- common/autotest_common.sh@638 -- # local es=0 00:14:51.090 23:59:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:51.090 23:59:21 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.090 23:59:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:51.090 23:59:21 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.090 23:59:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:51.090 23:59:21 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.090 23:59:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:51.090 23:59:21 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.090 23:59:21 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:51.090 23:59:21 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:51.090 request: 00:14:51.090 { 00:14:51.090 "uuid": "ab84554a-835e-40bd-90a2-189fc4309384", 00:14:51.090 "method": "bdev_lvol_get_lvstores", 00:14:51.090 "req_id": 1 00:14:51.090 } 00:14:51.090 Got JSON-RPC error response 00:14:51.090 response: 00:14:51.090 { 00:14:51.090 "code": -19, 00:14:51.090 "message": "No such device" 00:14:51.090 } 00:14:51.090 23:59:21 -- common/autotest_common.sh@641 -- # es=1 00:14:51.090 23:59:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:51.090 23:59:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:51.090 23:59:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:51.090 23:59:21 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:51.350 aio_bdev 00:14:51.350 23:59:21 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev cd67acbe-2c36-44e5-b6a8-2a64b5b88442 00:14:51.350 23:59:21 -- 
common/autotest_common.sh@885 -- # local bdev_name=cd67acbe-2c36-44e5-b6a8-2a64b5b88442 00:14:51.350 23:59:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:51.350 23:59:21 -- common/autotest_common.sh@887 -- # local i 00:14:51.350 23:59:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:51.350 23:59:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:51.350 23:59:21 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:51.350 23:59:21 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cd67acbe-2c36-44e5-b6a8-2a64b5b88442 -t 2000 00:14:51.611 [ 00:14:51.611 { 00:14:51.611 "name": "cd67acbe-2c36-44e5-b6a8-2a64b5b88442", 00:14:51.611 "aliases": [ 00:14:51.611 "lvs/lvol" 00:14:51.611 ], 00:14:51.611 "product_name": "Logical Volume", 00:14:51.611 "block_size": 4096, 00:14:51.611 "num_blocks": 38912, 00:14:51.611 "uuid": "cd67acbe-2c36-44e5-b6a8-2a64b5b88442", 00:14:51.611 "assigned_rate_limits": { 00:14:51.611 "rw_ios_per_sec": 0, 00:14:51.611 "rw_mbytes_per_sec": 0, 00:14:51.611 "r_mbytes_per_sec": 0, 00:14:51.611 "w_mbytes_per_sec": 0 00:14:51.611 }, 00:14:51.611 "claimed": false, 00:14:51.611 "zoned": false, 00:14:51.611 "supported_io_types": { 00:14:51.611 "read": true, 00:14:51.611 "write": true, 00:14:51.611 "unmap": true, 00:14:51.611 "write_zeroes": true, 00:14:51.611 "flush": false, 00:14:51.611 "reset": true, 00:14:51.611 "compare": false, 00:14:51.611 "compare_and_write": false, 00:14:51.611 "abort": false, 00:14:51.611 "nvme_admin": false, 00:14:51.611 "nvme_io": false 00:14:51.611 }, 00:14:51.611 "driver_specific": { 00:14:51.611 "lvol": { 00:14:51.611 "lvol_store_uuid": "ab84554a-835e-40bd-90a2-189fc4309384", 00:14:51.611 "base_bdev": "aio_bdev", 00:14:51.611 "thin_provision": false, 00:14:51.611 "snapshot": false, 00:14:51.611 "clone": false, 00:14:51.611 "esnap_clone": false 00:14:51.611 } 00:14:51.611 } 00:14:51.611 } 00:14:51.611 ] 00:14:51.611 23:59:21 -- common/autotest_common.sh@893 -- # return 0 00:14:51.611 23:59:21 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:51.611 23:59:21 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:51.872 23:59:21 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:51.872 23:59:21 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:51.872 23:59:21 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:51.872 23:59:22 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:51.872 23:59:22 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cd67acbe-2c36-44e5-b6a8-2a64b5b88442 00:14:52.132 23:59:22 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab84554a-835e-40bd-90a2-189fc4309384 00:14:52.392 23:59:22 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:52.392 23:59:22 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:52.392 00:14:52.392 real 0m16.865s 00:14:52.392 user 
0m44.449s 00:14:52.392 sys 0m2.793s 00:14:52.392 23:59:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.392 23:59:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.392 ************************************ 00:14:52.392 END TEST lvs_grow_dirty 00:14:52.392 ************************************ 00:14:52.392 23:59:22 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:52.392 23:59:22 -- common/autotest_common.sh@794 -- # type=--id 00:14:52.392 23:59:22 -- common/autotest_common.sh@795 -- # id=0 00:14:52.392 23:59:22 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:52.392 23:59:22 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:52.392 23:59:22 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:52.392 23:59:22 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:52.392 23:59:22 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:52.392 23:59:22 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:52.392 nvmf_trace.0 00:14:52.392 23:59:22 -- common/autotest_common.sh@809 -- # return 0 00:14:52.392 23:59:22 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:52.392 23:59:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:52.653 23:59:22 -- nvmf/common.sh@117 -- # sync 00:14:52.653 23:59:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.653 23:59:22 -- nvmf/common.sh@120 -- # set +e 00:14:52.653 23:59:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.653 23:59:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:52.653 rmmod nvme_tcp 00:14:52.653 rmmod nvme_fabrics 00:14:52.653 rmmod nvme_keyring 00:14:52.653 23:59:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.653 23:59:22 -- nvmf/common.sh@124 -- # set -e 00:14:52.653 23:59:22 -- nvmf/common.sh@125 -- # return 0 00:14:52.653 23:59:22 -- nvmf/common.sh@478 -- # '[' -n 347807 ']' 00:14:52.653 23:59:22 -- nvmf/common.sh@479 -- # killprocess 347807 00:14:52.653 23:59:22 -- common/autotest_common.sh@936 -- # '[' -z 347807 ']' 00:14:52.653 23:59:22 -- common/autotest_common.sh@940 -- # kill -0 347807 00:14:52.653 23:59:22 -- common/autotest_common.sh@941 -- # uname 00:14:52.653 23:59:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:52.653 23:59:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 347807 00:14:52.653 23:59:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:52.653 23:59:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:52.653 23:59:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 347807' 00:14:52.653 killing process with pid 347807 00:14:52.653 23:59:22 -- common/autotest_common.sh@955 -- # kill 347807 00:14:52.653 23:59:22 -- common/autotest_common.sh@960 -- # wait 347807 00:14:52.653 23:59:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:52.653 23:59:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:52.653 23:59:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:52.653 23:59:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.653 23:59:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.653 23:59:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.653 23:59:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.653 23:59:22 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:55.199 23:59:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.199 00:14:55.199 real 0m43.067s 00:14:55.199 user 1m5.465s 00:14:55.199 sys 0m9.745s 00:14:55.199 23:59:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:55.199 23:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:55.199 ************************************ 00:14:55.199 END TEST nvmf_lvs_grow 00:14:55.199 ************************************ 00:14:55.199 23:59:24 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:55.199 23:59:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:55.199 23:59:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:55.199 23:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:55.199 ************************************ 00:14:55.199 START TEST nvmf_bdev_io_wait 00:14:55.199 ************************************ 00:14:55.199 23:59:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:55.199 * Looking for test storage... 00:14:55.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.199 23:59:25 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.199 23:59:25 -- nvmf/common.sh@7 -- # uname -s 00:14:55.199 23:59:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.199 23:59:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.199 23:59:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.199 23:59:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.199 23:59:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.199 23:59:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.199 23:59:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.199 23:59:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.199 23:59:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.199 23:59:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.199 23:59:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:55.199 23:59:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:55.199 23:59:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.199 23:59:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.199 23:59:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.199 23:59:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.199 23:59:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.199 23:59:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.199 23:59:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.199 23:59:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.200 23:59:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.200 23:59:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.200 23:59:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.200 23:59:25 -- paths/export.sh@5 -- # export PATH 00:14:55.200 23:59:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.200 23:59:25 -- nvmf/common.sh@47 -- # : 0 00:14:55.200 23:59:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.200 23:59:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.200 23:59:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.200 23:59:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.200 23:59:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.200 23:59:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.200 23:59:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.200 23:59:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.200 23:59:25 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.200 23:59:25 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.200 23:59:25 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:55.200 23:59:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:55.200 23:59:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.200 23:59:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:55.200 23:59:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:55.200 23:59:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:55.200 23:59:25 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.200 23:59:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.200 23:59:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.200 23:59:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:55.200 23:59:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:55.200 23:59:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.200 23:59:25 -- common/autotest_common.sh@10 -- # set +x 00:15:01.917 23:59:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:01.917 23:59:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.917 23:59:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.917 23:59:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.917 23:59:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.917 23:59:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.917 23:59:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.917 23:59:31 -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.917 23:59:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.917 23:59:31 -- nvmf/common.sh@296 -- # e810=() 00:15:01.917 23:59:31 -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.917 23:59:31 -- nvmf/common.sh@297 -- # x722=() 00:15:01.917 23:59:31 -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.917 23:59:31 -- nvmf/common.sh@298 -- # mlx=() 00:15:01.917 23:59:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.917 23:59:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.917 23:59:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.917 23:59:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.917 23:59:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.917 23:59:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.917 23:59:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:01.917 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:01.917 23:59:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:15:01.917 23:59:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:01.917 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:01.917 23:59:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.917 23:59:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.917 23:59:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.917 23:59:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:01.917 23:59:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.917 23:59:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:01.917 Found net devices under 0000:31:00.0: cvl_0_0 00:15:01.917 23:59:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.917 23:59:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.917 23:59:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.917 23:59:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:01.917 23:59:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.917 23:59:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:01.917 Found net devices under 0000:31:00.1: cvl_0_1 00:15:01.917 23:59:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.917 23:59:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:01.917 23:59:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:01.917 23:59:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:01.917 23:59:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:01.917 23:59:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.917 23:59:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.917 23:59:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.917 23:59:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.917 23:59:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.917 23:59:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.917 23:59:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.917 23:59:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.917 23:59:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.917 23:59:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.917 23:59:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.917 23:59:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.917 23:59:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.179 23:59:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.179 23:59:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.179 23:59:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.179 23:59:32 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.179 23:59:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.179 23:59:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.179 23:59:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.733 ms 00:15:02.179 00:15:02.179 --- 10.0.0.2 ping statistics --- 00:15:02.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.179 rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms 00:15:02.179 23:59:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:15:02.179 00:15:02.179 --- 10.0.0.1 ping statistics --- 00:15:02.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.179 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:15:02.179 23:59:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.179 23:59:32 -- nvmf/common.sh@411 -- # return 0 00:15:02.179 23:59:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:02.179 23:59:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.179 23:59:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:02.179 23:59:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:02.179 23:59:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.179 23:59:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:02.179 23:59:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:02.179 23:59:32 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:02.179 23:59:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:02.179 23:59:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:02.179 23:59:32 -- common/autotest_common.sh@10 -- # set +x 00:15:02.179 23:59:32 -- nvmf/common.sh@470 -- # nvmfpid=352771 00:15:02.179 23:59:32 -- nvmf/common.sh@471 -- # waitforlisten 352771 00:15:02.179 23:59:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:02.180 23:59:32 -- common/autotest_common.sh@817 -- # '[' -z 352771 ']' 00:15:02.180 23:59:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.180 23:59:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:02.180 23:59:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.180 23:59:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:02.180 23:59:32 -- common/autotest_common.sh@10 -- # set +x 00:15:02.441 [2024-04-26 23:59:32.420195] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
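The namespace plumbing traced above is the standard nvmf_tcp_init sequence for a phy run: the target-side port of the e810 pair is moved into its own network namespace so the target (10.0.0.2) and initiator (10.0.0.1) exchange traffic over the physical link, and nvmf_tgt is then launched inside that namespace. Condensed, using the device names and addresses this node reports (the nvmf_tgt path shortened to the repository-relative one):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP traffic back into the root namespace
    ping -c 1 10.0.0.2                                               # sanity-check reachability in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &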
00:15:02.441 [2024-04-26 23:59:32.420258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.441 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.441 [2024-04-26 23:59:32.491469] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.441 [2024-04-26 23:59:32.566654] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.441 [2024-04-26 23:59:32.566696] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.441 [2024-04-26 23:59:32.566704] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.441 [2024-04-26 23:59:32.566711] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.441 [2024-04-26 23:59:32.566717] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.441 [2024-04-26 23:59:32.566761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.441 [2024-04-26 23:59:32.566904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.441 [2024-04-26 23:59:32.567189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.441 [2024-04-26 23:59:32.567190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.013 23:59:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:03.013 23:59:33 -- common/autotest_common.sh@850 -- # return 0 00:15:03.013 23:59:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:03.013 23:59:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:03.013 23:59:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.275 23:59:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:03.275 23:59:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.275 23:59:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.275 23:59:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:03.275 23:59:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.275 23:59:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.275 23:59:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.275 23:59:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.275 23:59:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.275 [2024-04-26 23:59:33.303358] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.275 23:59:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:03.275 23:59:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.275 23:59:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.275 Malloc0 00:15:03.275 23:59:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:03.275 23:59:33 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.275 23:59:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.275 23:59:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:03.275 23:59:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.275 23:59:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.275 23:59:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.275 23:59:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.275 23:59:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.275 [2024-04-26 23:59:33.369099] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.275 23:59:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=352972 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@30 -- # READ_PID=352974 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:03.275 23:59:33 -- nvmf/common.sh@521 -- # config=() 00:15:03.275 23:59:33 -- nvmf/common.sh@521 -- # local subsystem config 00:15:03.275 23:59:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:03.275 23:59:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:03.275 { 00:15:03.275 "params": { 00:15:03.275 "name": "Nvme$subsystem", 00:15:03.275 "trtype": "$TEST_TRANSPORT", 00:15:03.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.275 "adrfam": "ipv4", 00:15:03.275 "trsvcid": "$NVMF_PORT", 00:15:03.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.275 "hdgst": ${hdgst:-false}, 00:15:03.275 "ddgst": ${ddgst:-false} 00:15:03.275 }, 00:15:03.275 "method": "bdev_nvme_attach_controller" 00:15:03.275 } 00:15:03.275 EOF 00:15:03.275 )") 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=352976 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:03.275 23:59:33 -- nvmf/common.sh@521 -- # config=() 00:15:03.275 23:59:33 -- nvmf/common.sh@521 -- # local subsystem config 00:15:03.275 23:59:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:03.275 23:59:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:03.275 { 00:15:03.275 "params": { 00:15:03.275 "name": "Nvme$subsystem", 00:15:03.275 "trtype": "$TEST_TRANSPORT", 00:15:03.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.275 "adrfam": "ipv4", 00:15:03.275 "trsvcid": "$NVMF_PORT", 00:15:03.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.275 "hdgst": ${hdgst:-false}, 00:15:03.275 "ddgst": ${ddgst:-false} 00:15:03.275 }, 00:15:03.275 "method": "bdev_nvme_attach_controller" 00:15:03.275 } 00:15:03.275 EOF 00:15:03.275 )") 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=352979 00:15:03.275 23:59:33 -- 
target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@35 -- # sync 00:15:03.275 23:59:33 -- nvmf/common.sh@543 -- # cat 00:15:03.275 23:59:33 -- nvmf/common.sh@521 -- # config=() 00:15:03.275 23:59:33 -- nvmf/common.sh@521 -- # local subsystem config 00:15:03.275 23:59:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:03.275 23:59:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:03.275 { 00:15:03.275 "params": { 00:15:03.275 "name": "Nvme$subsystem", 00:15:03.275 "trtype": "$TEST_TRANSPORT", 00:15:03.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.275 "adrfam": "ipv4", 00:15:03.275 "trsvcid": "$NVMF_PORT", 00:15:03.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.275 "hdgst": ${hdgst:-false}, 00:15:03.275 "ddgst": ${ddgst:-false} 00:15:03.275 }, 00:15:03.275 "method": "bdev_nvme_attach_controller" 00:15:03.275 } 00:15:03.275 EOF 00:15:03.275 )") 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:03.275 23:59:33 -- nvmf/common.sh@521 -- # config=() 00:15:03.275 23:59:33 -- nvmf/common.sh@543 -- # cat 00:15:03.275 23:59:33 -- nvmf/common.sh@521 -- # local subsystem config 00:15:03.275 23:59:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:03.275 23:59:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:03.275 { 00:15:03.275 "params": { 00:15:03.275 "name": "Nvme$subsystem", 00:15:03.275 "trtype": "$TEST_TRANSPORT", 00:15:03.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.275 "adrfam": "ipv4", 00:15:03.275 "trsvcid": "$NVMF_PORT", 00:15:03.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.275 "hdgst": ${hdgst:-false}, 00:15:03.275 "ddgst": ${ddgst:-false} 00:15:03.275 }, 00:15:03.275 "method": "bdev_nvme_attach_controller" 00:15:03.275 } 00:15:03.275 EOF 00:15:03.275 )") 00:15:03.275 23:59:33 -- nvmf/common.sh@543 -- # cat 00:15:03.275 23:59:33 -- target/bdev_io_wait.sh@37 -- # wait 352972 00:15:03.275 23:59:33 -- nvmf/common.sh@543 -- # cat 00:15:03.275 23:59:33 -- nvmf/common.sh@545 -- # jq . 00:15:03.275 23:59:33 -- nvmf/common.sh@545 -- # jq . 00:15:03.275 23:59:33 -- nvmf/common.sh@545 -- # jq . 00:15:03.275 23:59:33 -- nvmf/common.sh@546 -- # IFS=, 00:15:03.275 23:59:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:03.275 "params": { 00:15:03.275 "name": "Nvme1", 00:15:03.275 "trtype": "tcp", 00:15:03.275 "traddr": "10.0.0.2", 00:15:03.275 "adrfam": "ipv4", 00:15:03.275 "trsvcid": "4420", 00:15:03.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.275 "hdgst": false, 00:15:03.275 "ddgst": false 00:15:03.275 }, 00:15:03.275 "method": "bdev_nvme_attach_controller" 00:15:03.275 }' 00:15:03.275 23:59:33 -- nvmf/common.sh@545 -- # jq . 
00:15:03.275 23:59:33 -- nvmf/common.sh@546 -- # IFS=, 00:15:03.275 23:59:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:03.275 "params": { 00:15:03.275 "name": "Nvme1", 00:15:03.275 "trtype": "tcp", 00:15:03.275 "traddr": "10.0.0.2", 00:15:03.275 "adrfam": "ipv4", 00:15:03.275 "trsvcid": "4420", 00:15:03.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.275 "hdgst": false, 00:15:03.275 "ddgst": false 00:15:03.275 }, 00:15:03.275 "method": "bdev_nvme_attach_controller" 00:15:03.275 }' 00:15:03.275 23:59:33 -- nvmf/common.sh@546 -- # IFS=, 00:15:03.275 23:59:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:03.275 "params": { 00:15:03.275 "name": "Nvme1", 00:15:03.276 "trtype": "tcp", 00:15:03.276 "traddr": "10.0.0.2", 00:15:03.276 "adrfam": "ipv4", 00:15:03.276 "trsvcid": "4420", 00:15:03.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.276 "hdgst": false, 00:15:03.276 "ddgst": false 00:15:03.276 }, 00:15:03.276 "method": "bdev_nvme_attach_controller" 00:15:03.276 }' 00:15:03.276 23:59:33 -- nvmf/common.sh@546 -- # IFS=, 00:15:03.276 23:59:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:03.276 "params": { 00:15:03.276 "name": "Nvme1", 00:15:03.276 "trtype": "tcp", 00:15:03.276 "traddr": "10.0.0.2", 00:15:03.276 "adrfam": "ipv4", 00:15:03.276 "trsvcid": "4420", 00:15:03.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.276 "hdgst": false, 00:15:03.276 "ddgst": false 00:15:03.276 }, 00:15:03.276 "method": "bdev_nvme_attach_controller" 00:15:03.276 }' 00:15:03.276 [2024-04-26 23:59:33.422403] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:15:03.276 [2024-04-26 23:59:33.422456] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:03.276 [2024-04-26 23:59:33.426250] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:15:03.276 [2024-04-26 23:59:33.426295] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:03.276 [2024-04-26 23:59:33.431810] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:15:03.276 [2024-04-26 23:59:33.431885] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:03.276 [2024-04-26 23:59:33.435162] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:15:03.276 [2024-04-26 23:59:33.435220] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:03.276 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.537 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.537 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.537 [2024-04-26 23:59:33.566520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.537 [2024-04-26 23:59:33.613576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.537 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.537 [2024-04-26 23:59:33.618463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:03.537 [2024-04-26 23:59:33.660130] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.537 [2024-04-26 23:59:33.664292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:03.537 [2024-04-26 23:59:33.709417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:03.537 [2024-04-26 23:59:33.709878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.799 [2024-04-26 23:59:33.758626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:03.799 Running I/O for 1 seconds... 00:15:03.799 Running I/O for 1 seconds... 00:15:03.799 Running I/O for 1 seconds... 00:15:03.799 Running I/O for 1 seconds... 00:15:04.742 00:15:04.742 Latency(us) 00:15:04.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.742 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:04.742 Nvme1n1 : 1.00 14905.05 58.22 0.00 0.00 8564.70 4560.21 17039.36 00:15:04.742 =================================================================================================================== 00:15:04.742 Total : 14905.05 58.22 0.00 0.00 8564.70 4560.21 17039.36 00:15:04.742 00:15:04.742 Latency(us) 00:15:04.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.742 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:04.742 Nvme1n1 : 1.01 11854.89 46.31 0.00 0.00 10758.29 6662.83 19005.44 00:15:04.742 =================================================================================================================== 00:15:04.742 Total : 11854.89 46.31 0.00 0.00 10758.29 6662.83 19005.44 00:15:04.742 00:15:04.742 Latency(us) 00:15:04.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.742 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:04.742 Nvme1n1 : 1.00 16908.64 66.05 0.00 0.00 7551.61 3604.48 19333.12 00:15:04.742 =================================================================================================================== 00:15:04.743 Total : 16908.64 66.05 0.00 0.00 7551.61 3604.48 19333.12 00:15:04.743 00:15:04.743 Latency(us) 00:15:04.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.743 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:04.743 Nvme1n1 : 1.00 191511.65 748.09 0.00 0.00 665.75 259.41 761.17 00:15:04.743 =================================================================================================================== 00:15:04.743 Total : 191511.65 748.09 0.00 0.00 665.75 259.41 761.17 00:15:05.004 23:59:35 -- target/bdev_io_wait.sh@38 -- # wait 352974 00:15:05.004 
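Note on the four bdevperf jobs above: each is launched with its own core mask (-m 0x10/0x20/0x40/0x80) and reads a generated JSON config from /dev/fd/63 that attaches the shared NVMe/TCP controller before the write/read/flush/unmap workloads run. A minimal sketch of that pattern follows; the trace only prints the params block verbatim, so the surrounding subsystems/bdev wrapper and the temporary file name here are assumptions based on SPDK's standard JSON config layout.
# Sketch: rough equivalent of gen_nvmf_target_json piped to bdevperf via --json /dev/fd/63.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# One bdevperf process per workload, matching the command lines in the trace above.
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256 &
./build/examples/bdevperf -m 0x20 -i 2 --json /tmp/nvme1.json -q 128 -o 4096 -w read  -t 1 -s 256 &
./build/examples/bdevperf -m 0x40 -i 3 --json /tmp/nvme1.json -q 128 -o 4096 -w flush -t 1 -s 256 &
./build/examples/bdevperf -m 0x80 -i 4 --json /tmp/nvme1.json -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait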
23:59:35 -- target/bdev_io_wait.sh@39 -- # wait 352976 00:15:05.004 23:59:35 -- target/bdev_io_wait.sh@40 -- # wait 352979 00:15:05.004 23:59:35 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.004 23:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.004 23:59:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.004 23:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.004 23:59:35 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:05.004 23:59:35 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:05.004 23:59:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:05.004 23:59:35 -- nvmf/common.sh@117 -- # sync 00:15:05.004 23:59:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.004 23:59:35 -- nvmf/common.sh@120 -- # set +e 00:15:05.004 23:59:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.004 23:59:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.004 rmmod nvme_tcp 00:15:05.004 rmmod nvme_fabrics 00:15:05.004 rmmod nvme_keyring 00:15:05.004 23:59:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.004 23:59:35 -- nvmf/common.sh@124 -- # set -e 00:15:05.004 23:59:35 -- nvmf/common.sh@125 -- # return 0 00:15:05.004 23:59:35 -- nvmf/common.sh@478 -- # '[' -n 352771 ']' 00:15:05.004 23:59:35 -- nvmf/common.sh@479 -- # killprocess 352771 00:15:05.004 23:59:35 -- common/autotest_common.sh@936 -- # '[' -z 352771 ']' 00:15:05.004 23:59:35 -- common/autotest_common.sh@940 -- # kill -0 352771 00:15:05.004 23:59:35 -- common/autotest_common.sh@941 -- # uname 00:15:05.004 23:59:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.004 23:59:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 352771 00:15:05.004 23:59:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:05.004 23:59:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:05.004 23:59:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 352771' 00:15:05.004 killing process with pid 352771 00:15:05.266 23:59:35 -- common/autotest_common.sh@955 -- # kill 352771 00:15:05.266 23:59:35 -- common/autotest_common.sh@960 -- # wait 352771 00:15:05.266 23:59:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:05.266 23:59:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:05.266 23:59:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:05.266 23:59:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.266 23:59:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.266 23:59:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.266 23:59:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.266 23:59:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.805 23:59:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.805 00:15:07.805 real 0m12.297s 00:15:07.805 user 0m18.429s 00:15:07.805 sys 0m6.621s 00:15:07.805 23:59:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:07.805 23:59:37 -- common/autotest_common.sh@10 -- # set +x 00:15:07.805 ************************************ 00:15:07.805 END TEST nvmf_bdev_io_wait 00:15:07.805 ************************************ 00:15:07.805 23:59:37 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:07.805 23:59:37 -- common/autotest_common.sh@1087 -- # '[' 3 
-le 1 ']' 00:15:07.805 23:59:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.805 23:59:37 -- common/autotest_common.sh@10 -- # set +x 00:15:07.805 ************************************ 00:15:07.805 START TEST nvmf_queue_depth 00:15:07.805 ************************************ 00:15:07.805 23:59:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:07.805 * Looking for test storage... 00:15:07.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.805 23:59:37 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.805 23:59:37 -- nvmf/common.sh@7 -- # uname -s 00:15:07.805 23:59:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.805 23:59:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.805 23:59:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.805 23:59:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.805 23:59:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.805 23:59:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.805 23:59:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.805 23:59:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.805 23:59:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.805 23:59:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.805 23:59:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:07.805 23:59:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:07.805 23:59:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.805 23:59:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.805 23:59:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.805 23:59:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.805 23:59:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.805 23:59:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.805 23:59:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.805 23:59:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.805 23:59:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.805 23:59:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.805 23:59:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.805 23:59:37 -- paths/export.sh@5 -- # export PATH 00:15:07.805 23:59:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.805 23:59:37 -- nvmf/common.sh@47 -- # : 0 00:15:07.805 23:59:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.805 23:59:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.805 23:59:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.805 23:59:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.805 23:59:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.805 23:59:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.805 23:59:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.805 23:59:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.805 23:59:37 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:07.805 23:59:37 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:07.805 23:59:37 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.805 23:59:37 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:07.805 23:59:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:07.805 23:59:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.805 23:59:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:07.805 23:59:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:07.805 23:59:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:07.805 23:59:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.805 23:59:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.805 23:59:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.805 23:59:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:07.805 23:59:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:07.805 23:59:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.805 23:59:37 -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.391 23:59:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:14.391 23:59:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:14.391 23:59:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:14.391 23:59:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:14.391 23:59:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:14.391 23:59:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:14.391 23:59:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:14.391 23:59:44 -- nvmf/common.sh@295 -- # net_devs=() 00:15:14.391 23:59:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:14.391 23:59:44 -- nvmf/common.sh@296 -- # e810=() 00:15:14.391 23:59:44 -- nvmf/common.sh@296 -- # local -ga e810 00:15:14.391 23:59:44 -- nvmf/common.sh@297 -- # x722=() 00:15:14.391 23:59:44 -- nvmf/common.sh@297 -- # local -ga x722 00:15:14.391 23:59:44 -- nvmf/common.sh@298 -- # mlx=() 00:15:14.391 23:59:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:14.391 23:59:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.391 23:59:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.391 23:59:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.391 23:59:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.391 23:59:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.391 23:59:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.391 23:59:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.391 23:59:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.391 23:59:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.392 23:59:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.392 23:59:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.392 23:59:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:14.392 23:59:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:14.392 23:59:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:14.392 23:59:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.392 23:59:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:14.392 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:14.392 23:59:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.392 23:59:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:14.392 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:14.392 23:59:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:15:14.392 23:59:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:14.392 23:59:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.392 23:59:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.392 23:59:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:14.392 23:59:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.392 23:59:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:14.392 Found net devices under 0000:31:00.0: cvl_0_0 00:15:14.392 23:59:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.392 23:59:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.392 23:59:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.392 23:59:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:14.392 23:59:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.392 23:59:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:14.392 Found net devices under 0000:31:00.1: cvl_0_1 00:15:14.392 23:59:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.392 23:59:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:14.392 23:59:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:14.392 23:59:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:14.392 23:59:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:14.392 23:59:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.392 23:59:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.392 23:59:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.392 23:59:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:14.392 23:59:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.392 23:59:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.392 23:59:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:14.392 23:59:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.392 23:59:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.392 23:59:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:14.392 23:59:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:14.392 23:59:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.392 23:59:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.392 23:59:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.392 23:59:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.653 23:59:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:14.653 23:59:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.653 23:59:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.653 23:59:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.653 23:59:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:14.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:14.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:15:14.653 00:15:14.653 --- 10.0.0.2 ping statistics --- 00:15:14.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.653 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:15:14.653 23:59:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:14.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:15:14.653 00:15:14.653 --- 10.0.0.1 ping statistics --- 00:15:14.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.653 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:15:14.653 23:59:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.653 23:59:44 -- nvmf/common.sh@411 -- # return 0 00:15:14.653 23:59:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:14.653 23:59:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.653 23:59:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:14.653 23:59:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:14.653 23:59:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.653 23:59:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:14.653 23:59:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:14.653 23:59:44 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:14.653 23:59:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:14.653 23:59:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:14.653 23:59:44 -- common/autotest_common.sh@10 -- # set +x 00:15:14.653 23:59:44 -- nvmf/common.sh@470 -- # nvmfpid=357646 00:15:14.653 23:59:44 -- nvmf/common.sh@471 -- # waitforlisten 357646 00:15:14.653 23:59:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:14.653 23:59:44 -- common/autotest_common.sh@817 -- # '[' -z 357646 ']' 00:15:14.653 23:59:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.653 23:59:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:14.653 23:59:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.653 23:59:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:14.653 23:59:44 -- common/autotest_common.sh@10 -- # set +x 00:15:14.653 [2024-04-26 23:59:44.861053] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:15:14.653 [2024-04-26 23:59:44.861121] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.913 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.913 [2024-04-26 23:59:44.932144] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.913 [2024-04-26 23:59:45.004708] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.913 [2024-04-26 23:59:45.004746] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
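For reference, the nvmf_tcp_init steps traced above move one of the two e810 ports (cvl_0_0) into a private network namespace and address the pair back-to-back, which is why the target listens on 10.0.0.2 while the host-side tools use 10.0.0.1. A condensed sketch of those commands, taken from the trace:
# Condensed from the nvmf_tcp_init trace above (nvmf/common.sh).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # connectivity checks, as logged above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1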
00:15:14.913 [2024-04-26 23:59:45.004753] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.913 [2024-04-26 23:59:45.004760] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.913 [2024-04-26 23:59:45.004765] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.913 [2024-04-26 23:59:45.004783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.484 23:59:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:15.484 23:59:45 -- common/autotest_common.sh@850 -- # return 0 00:15:15.484 23:59:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:15.484 23:59:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:15.484 23:59:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.484 23:59:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.484 23:59:45 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.484 23:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.484 23:59:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.484 [2024-04-26 23:59:45.687580] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.484 23:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.484 23:59:45 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:15.484 23:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.484 23:59:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.744 Malloc0 00:15:15.744 23:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.744 23:59:45 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:15.744 23:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.744 23:59:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.744 23:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.744 23:59:45 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.744 23:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.744 23:59:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.744 23:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.744 23:59:45 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.744 23:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.744 23:59:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.744 [2024-04-26 23:59:45.754858] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.744 23:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.744 23:59:45 -- target/queue_depth.sh@30 -- # bdevperf_pid=357760 00:15:15.744 23:59:45 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:15.744 23:59:45 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:15.744 23:59:45 -- target/queue_depth.sh@33 -- # waitforlisten 357760 /var/tmp/bdevperf.sock 00:15:15.744 23:59:45 -- common/autotest_common.sh@817 -- # '[' -z 357760 ']' 00:15:15.744 
23:59:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.744 23:59:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:15.744 23:59:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.744 23:59:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:15.744 23:59:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.744 [2024-04-26 23:59:45.805240] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:15:15.744 [2024-04-26 23:59:45.805291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357760 ] 00:15:15.744 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.744 [2024-04-26 23:59:45.864977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.744 [2024-04-26 23:59:45.929320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.684 23:59:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:16.684 23:59:46 -- common/autotest_common.sh@850 -- # return 0 00:15:16.684 23:59:46 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:16.684 23:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.684 23:59:46 -- common/autotest_common.sh@10 -- # set +x 00:15:16.684 NVMe0n1 00:15:16.684 23:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.684 23:59:46 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:16.944 Running I/O for 10 seconds... 
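The queue-depth run itself follows from the RPCs traced above: the target gets a TCP transport, a 64 MiB malloc bdev, and a subsystem with a listener on 10.0.0.2:4420; bdevperf is then started in wait mode (-z) on its own RPC socket, the controller is attached over that socket, and perform_tests drives 10 seconds of verify I/O at queue depth 1024. A condensed sketch, with rpc.py standing in for the rpc_cmd wrapper used by the scripts and the long workspace paths shortened:
# Target side (issued against the nvmf_tgt running in the cvl_0_0_ns_spdk namespace).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits on /var/tmp/bdevperf.sock, the controller is attached,
# then the verify workload is started (queue depth 1024, 4 KiB I/O, 10 seconds).
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests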
00:15:26.986 00:15:26.986 Latency(us) 00:15:26.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.986 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:26.986 Verification LBA range: start 0x0 length 0x4000 00:15:26.986 NVMe0n1 : 10.08 9548.38 37.30 0.00 0.00 106847.64 24139.09 79953.92 00:15:26.986 =================================================================================================================== 00:15:26.986 Total : 9548.38 37.30 0.00 0.00 106847.64 24139.09 79953.92 00:15:26.986 0 00:15:26.986 23:59:57 -- target/queue_depth.sh@39 -- # killprocess 357760 00:15:26.986 23:59:57 -- common/autotest_common.sh@936 -- # '[' -z 357760 ']' 00:15:26.986 23:59:57 -- common/autotest_common.sh@940 -- # kill -0 357760 00:15:26.986 23:59:57 -- common/autotest_common.sh@941 -- # uname 00:15:26.986 23:59:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.986 23:59:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 357760 00:15:26.986 23:59:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:26.986 23:59:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:26.986 23:59:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 357760' 00:15:26.986 killing process with pid 357760 00:15:26.986 23:59:57 -- common/autotest_common.sh@955 -- # kill 357760 00:15:26.986 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.986 00:15:26.986 Latency(us) 00:15:26.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.986 =================================================================================================================== 00:15:26.986 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.986 23:59:57 -- common/autotest_common.sh@960 -- # wait 357760 00:15:27.246 23:59:57 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:27.246 23:59:57 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:27.246 23:59:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:27.246 23:59:57 -- nvmf/common.sh@117 -- # sync 00:15:27.246 23:59:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.246 23:59:57 -- nvmf/common.sh@120 -- # set +e 00:15:27.246 23:59:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.246 23:59:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.246 rmmod nvme_tcp 00:15:27.246 rmmod nvme_fabrics 00:15:27.246 rmmod nvme_keyring 00:15:27.246 23:59:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.246 23:59:57 -- nvmf/common.sh@124 -- # set -e 00:15:27.246 23:59:57 -- nvmf/common.sh@125 -- # return 0 00:15:27.246 23:59:57 -- nvmf/common.sh@478 -- # '[' -n 357646 ']' 00:15:27.246 23:59:57 -- nvmf/common.sh@479 -- # killprocess 357646 00:15:27.246 23:59:57 -- common/autotest_common.sh@936 -- # '[' -z 357646 ']' 00:15:27.246 23:59:57 -- common/autotest_common.sh@940 -- # kill -0 357646 00:15:27.246 23:59:57 -- common/autotest_common.sh@941 -- # uname 00:15:27.246 23:59:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.246 23:59:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 357646 00:15:27.246 23:59:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:27.246 23:59:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:27.246 23:59:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 357646' 00:15:27.246 killing process with pid 357646 00:15:27.246 23:59:57 -- 
common/autotest_common.sh@955 -- # kill 357646 00:15:27.246 23:59:57 -- common/autotest_common.sh@960 -- # wait 357646 00:15:27.507 23:59:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:27.507 23:59:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:27.507 23:59:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:27.507 23:59:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.507 23:59:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.507 23:59:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.507 23:59:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.507 23:59:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.420 23:59:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.420 00:15:29.420 real 0m21.947s 00:15:29.420 user 0m25.935s 00:15:29.420 sys 0m6.210s 00:15:29.420 23:59:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:29.420 23:59:59 -- common/autotest_common.sh@10 -- # set +x 00:15:29.420 ************************************ 00:15:29.420 END TEST nvmf_queue_depth 00:15:29.420 ************************************ 00:15:29.420 23:59:59 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:29.420 23:59:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:29.420 23:59:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.420 23:59:59 -- common/autotest_common.sh@10 -- # set +x 00:15:29.681 ************************************ 00:15:29.681 START TEST nvmf_multipath 00:15:29.681 ************************************ 00:15:29.681 23:59:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:29.681 * Looking for test storage... 
00:15:29.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.681 23:59:59 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.681 23:59:59 -- nvmf/common.sh@7 -- # uname -s 00:15:29.681 23:59:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.681 23:59:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.681 23:59:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.681 23:59:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.681 23:59:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.681 23:59:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.681 23:59:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.681 23:59:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.681 23:59:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.681 23:59:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.681 23:59:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.681 23:59:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.681 23:59:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.681 23:59:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.681 23:59:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.681 23:59:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.681 23:59:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.682 23:59:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.682 23:59:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.682 23:59:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.682 23:59:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.682 23:59:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.682 23:59:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.682 23:59:59 -- paths/export.sh@5 -- # export PATH 00:15:29.682 23:59:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.682 23:59:59 -- nvmf/common.sh@47 -- # : 0 00:15:29.682 23:59:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.682 23:59:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.682 23:59:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.682 23:59:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.682 23:59:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.682 23:59:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.682 23:59:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.682 23:59:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.682 23:59:59 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.682 23:59:59 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.682 23:59:59 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:29.682 23:59:59 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.682 23:59:59 -- target/multipath.sh@43 -- # nvmftestinit 00:15:29.682 23:59:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:29.682 23:59:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.682 23:59:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:29.682 23:59:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:29.682 23:59:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:29.682 23:59:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.682 23:59:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.682 23:59:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.682 23:59:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:29.682 23:59:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:29.682 23:59:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:29.682 23:59:59 -- common/autotest_common.sh@10 -- # set +x 00:15:37.819 00:00:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:37.819 00:00:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.819 00:00:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.819 00:00:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.819 00:00:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.819 00:00:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.819 00:00:06 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.819 00:00:06 -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.819 00:00:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.819 00:00:06 -- nvmf/common.sh@296 -- # e810=() 00:15:37.819 00:00:06 -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.819 00:00:06 -- nvmf/common.sh@297 -- # x722=() 00:15:37.819 00:00:06 -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.819 00:00:06 -- nvmf/common.sh@298 -- # mlx=() 00:15:37.819 00:00:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.819 00:00:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.819 00:00:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.819 00:00:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.819 00:00:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.819 00:00:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.819 00:00:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:37.819 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:37.819 00:00:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.819 00:00:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:37.819 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:37.819 00:00:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.819 00:00:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.819 00:00:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.819 00:00:06 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:15:37.819 00:00:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.819 00:00:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:37.819 Found net devices under 0000:31:00.0: cvl_0_0 00:15:37.819 00:00:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.819 00:00:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.819 00:00:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.819 00:00:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:37.819 00:00:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.819 00:00:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:37.819 Found net devices under 0000:31:00.1: cvl_0_1 00:15:37.819 00:00:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.819 00:00:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:37.819 00:00:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:37.819 00:00:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:37.819 00:00:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.819 00:00:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.819 00:00:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.819 00:00:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:37.819 00:00:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.819 00:00:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.819 00:00:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:37.819 00:00:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.819 00:00:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.819 00:00:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:37.819 00:00:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:37.819 00:00:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.819 00:00:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.819 00:00:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.819 00:00:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.819 00:00:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:37.819 00:00:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.819 00:00:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.819 00:00:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.819 00:00:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:37.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:15:37.819 00:15:37.819 --- 10.0.0.2 ping statistics --- 00:15:37.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.819 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:15:37.819 00:00:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:37.819 00:15:37.819 --- 10.0.0.1 ping statistics --- 00:15:37.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.819 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:37.819 00:00:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.819 00:00:06 -- nvmf/common.sh@411 -- # return 0 00:15:37.819 00:00:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:37.819 00:00:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.819 00:00:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:37.819 00:00:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.819 00:00:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:37.819 00:00:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:37.819 00:00:06 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:37.819 00:00:06 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:37.819 only one NIC for nvmf test 00:15:37.819 00:00:06 -- target/multipath.sh@47 -- # nvmftestfini 00:15:37.819 00:00:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:37.819 00:00:06 -- nvmf/common.sh@117 -- # sync 00:15:37.819 00:00:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.819 00:00:06 -- nvmf/common.sh@120 -- # set +e 00:15:37.819 00:00:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:37.819 00:00:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.819 rmmod nvme_tcp 00:15:37.819 rmmod nvme_fabrics 00:15:37.819 rmmod nvme_keyring 00:15:37.819 00:00:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:37.819 00:00:07 -- nvmf/common.sh@124 -- # set -e 00:15:37.819 00:00:07 -- nvmf/common.sh@125 -- # return 0 00:15:37.819 00:00:07 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:37.819 00:00:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:37.819 00:00:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:37.819 00:00:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:37.819 00:00:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:37.819 00:00:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:37.819 00:00:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.819 00:00:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.819 00:00:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.203 00:00:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:39.203 00:00:09 -- target/multipath.sh@48 -- # exit 0 00:15:39.203 00:00:09 -- target/multipath.sh@1 -- # nvmftestfini 00:15:39.203 00:00:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:39.203 00:00:09 -- nvmf/common.sh@117 -- # sync 00:15:39.203 00:00:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.203 00:00:09 -- nvmf/common.sh@120 -- # set +e 00:15:39.203 00:00:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.203 00:00:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.203 00:00:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.203 00:00:09 -- nvmf/common.sh@124 -- # set -e 00:15:39.203 00:00:09 -- nvmf/common.sh@125 -- # return 0 00:15:39.203 00:00:09 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:39.203 00:00:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:39.203 00:00:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:39.203 00:00:09 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:15:39.203 00:00:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.203 00:00:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.203 00:00:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.203 00:00:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.203 00:00:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.203 00:00:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:39.203 00:15:39.203 real 0m9.429s 00:15:39.203 user 0m2.016s 00:15:39.203 sys 0m5.273s 00:15:39.203 00:00:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:39.203 00:00:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.203 ************************************ 00:15:39.203 END TEST nvmf_multipath 00:15:39.203 ************************************ 00:15:39.203 00:00:09 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:39.203 00:00:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.203 00:00:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.203 00:00:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.203 ************************************ 00:15:39.203 START TEST nvmf_zcopy 00:15:39.203 ************************************ 00:15:39.203 00:00:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:39.464 * Looking for test storage... 00:15:39.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.464 00:00:09 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.464 00:00:09 -- nvmf/common.sh@7 -- # uname -s 00:15:39.464 00:00:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.464 00:00:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.464 00:00:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.464 00:00:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.464 00:00:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.464 00:00:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.464 00:00:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.464 00:00:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.464 00:00:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.464 00:00:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.464 00:00:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.464 00:00:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.464 00:00:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.464 00:00:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.464 00:00:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.464 00:00:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.464 00:00:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.464 00:00:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.464 00:00:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.464 00:00:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.464 
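The nvmftestfini / nvmf_tcp_fini trace at the end of the multipath run above is the standard teardown for these TCP tests. A minimal sketch of that pattern, using the namespace and interface names from this log; the helper below is illustrative, not the literal nvmf/common.sh code, and the netns deletion is an assumption about what remove_spdk_ns does:

    # Illustrative teardown mirroring the traced sequence: retry kernel module
    # unload with errors tolerated, then drop the target netns and test address.
    cleanup_nvmf_tcp_testbed() {
        sync
        set +e                                    # module unload may fail while references drain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break      # first pass already removed nvme_tcp/nvme_fabrics/nvme_keyring above
            sleep 1
        done
        modprobe -v -r nvme-fabrics
        set -e
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: remove_spdk_ns tears down the target namespace
        ip -4 addr flush cvl_0_1                      # drop the initiator-side test address
    }

Running the unload loop with set +e matches the trace, where the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away on the first attempt.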
00:00:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.464 00:00:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.464 00:00:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.464 00:00:09 -- paths/export.sh@5 -- # export PATH 00:15:39.464 00:00:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.464 00:00:09 -- nvmf/common.sh@47 -- # : 0 00:15:39.464 00:00:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.464 00:00:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.464 00:00:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.464 00:00:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.464 00:00:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.464 00:00:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.464 00:00:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.464 00:00:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.464 00:00:09 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:39.464 00:00:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:39.464 00:00:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.464 00:00:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:39.464 00:00:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:39.464 00:00:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:39.464 00:00:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.464 00:00:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:15:39.464 00:00:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.464 00:00:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:39.464 00:00:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:39.464 00:00:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:39.464 00:00:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.600 00:00:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:47.600 00:00:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.600 00:00:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.600 00:00:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.600 00:00:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.600 00:00:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.600 00:00:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.600 00:00:16 -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.600 00:00:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.600 00:00:16 -- nvmf/common.sh@296 -- # e810=() 00:15:47.600 00:00:16 -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.600 00:00:16 -- nvmf/common.sh@297 -- # x722=() 00:15:47.600 00:00:16 -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.600 00:00:16 -- nvmf/common.sh@298 -- # mlx=() 00:15:47.600 00:00:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.600 00:00:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.600 00:00:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.600 00:00:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:47.600 00:00:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.600 00:00:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.600 00:00:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:47.600 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:47.600 00:00:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.600 00:00:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:47.600 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:15:47.600 00:00:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.600 00:00:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:47.600 00:00:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.600 00:00:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.600 00:00:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:47.600 00:00:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.600 00:00:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:47.601 Found net devices under 0000:31:00.0: cvl_0_0 00:15:47.601 00:00:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.601 00:00:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.601 00:00:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.601 00:00:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:47.601 00:00:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.601 00:00:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:47.601 Found net devices under 0000:31:00.1: cvl_0_1 00:15:47.601 00:00:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.601 00:00:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:47.601 00:00:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:47.601 00:00:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:47.601 00:00:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:47.601 00:00:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:47.601 00:00:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.601 00:00:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.601 00:00:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.601 00:00:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:47.601 00:00:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.601 00:00:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.601 00:00:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:47.601 00:00:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.601 00:00:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.601 00:00:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:47.601 00:00:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:47.601 00:00:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.601 00:00:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.601 00:00:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.601 00:00:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.601 00:00:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:47.601 00:00:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.601 00:00:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.601 
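The nvmf_tcp_init trace above splits the two E810 ports into a target/initiator pair. A minimal sketch of that topology, using exactly the interface, namespace, and address names printed in this log:

    # One port (cvl_0_0) becomes the NVMe/TCP target inside a private netns at 10.0.0.2;
    # the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables rule and the two pings that follow in the trace just confirm that port 4420 traffic is accepted on the initiator interface and that both addresses are reachable before the target application is started.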
00:00:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.601 00:00:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:47.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:15:47.601 00:15:47.601 --- 10.0.0.2 ping statistics --- 00:15:47.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.601 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:15:47.601 00:00:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:15:47.601 00:15:47.601 --- 10.0.0.1 ping statistics --- 00:15:47.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.601 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:15:47.601 00:00:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.601 00:00:16 -- nvmf/common.sh@411 -- # return 0 00:15:47.601 00:00:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:47.601 00:00:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.601 00:00:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:47.601 00:00:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:47.601 00:00:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.601 00:00:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:47.601 00:00:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:47.601 00:00:16 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:47.601 00:00:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:47.601 00:00:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:47.601 00:00:16 -- common/autotest_common.sh@10 -- # set +x 00:15:47.601 00:00:16 -- nvmf/common.sh@470 -- # nvmfpid=369122 00:15:47.601 00:00:16 -- nvmf/common.sh@471 -- # waitforlisten 369122 00:15:47.601 00:00:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:47.601 00:00:16 -- common/autotest_common.sh@817 -- # '[' -z 369122 ']' 00:15:47.601 00:00:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.601 00:00:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:47.601 00:00:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.601 00:00:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:47.601 00:00:16 -- common/autotest_common.sh@10 -- # set +x 00:15:47.601 [2024-04-27 00:00:16.950990] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:15:47.601 [2024-04-27 00:00:16.951051] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.601 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.601 [2024-04-27 00:00:17.021156] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.601 [2024-04-27 00:00:17.094156] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:47.601 [2024-04-27 00:00:17.094194] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.601 [2024-04-27 00:00:17.094202] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.601 [2024-04-27 00:00:17.094208] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.601 [2024-04-27 00:00:17.094214] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.601 [2024-04-27 00:00:17.094238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.601 00:00:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:47.601 00:00:17 -- common/autotest_common.sh@850 -- # return 0 00:15:47.601 00:00:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:47.601 00:00:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:47.601 00:00:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.601 00:00:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.601 00:00:17 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:47.601 00:00:17 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:47.601 00:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.601 00:00:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.601 [2024-04-27 00:00:17.764648] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.601 00:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.601 00:00:17 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:47.601 00:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.601 00:00:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.601 00:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.601 00:00:17 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.601 00:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.601 00:00:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.601 [2024-04-27 00:00:17.788819] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.601 00:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.601 00:00:17 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:47.601 00:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.601 00:00:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.601 00:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.601 00:00:17 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:47.601 00:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.601 00:00:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.863 malloc0 00:15:47.863 00:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.863 00:00:17 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:47.863 00:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.863 00:00:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.863 00:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.863 00:00:17 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:47.863 00:00:17 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:47.863 00:00:17 -- nvmf/common.sh@521 -- # config=() 00:15:47.863 00:00:17 -- nvmf/common.sh@521 -- # local subsystem config 00:15:47.863 00:00:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:47.863 00:00:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:47.863 { 00:15:47.863 "params": { 00:15:47.863 "name": "Nvme$subsystem", 00:15:47.863 "trtype": "$TEST_TRANSPORT", 00:15:47.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:47.863 "adrfam": "ipv4", 00:15:47.863 "trsvcid": "$NVMF_PORT", 00:15:47.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:47.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:47.863 "hdgst": ${hdgst:-false}, 00:15:47.863 "ddgst": ${ddgst:-false} 00:15:47.863 }, 00:15:47.863 "method": "bdev_nvme_attach_controller" 00:15:47.863 } 00:15:47.863 EOF 00:15:47.863 )") 00:15:47.863 00:00:17 -- nvmf/common.sh@543 -- # cat 00:15:47.863 00:00:17 -- nvmf/common.sh@545 -- # jq . 00:15:47.863 00:00:17 -- nvmf/common.sh@546 -- # IFS=, 00:15:47.863 00:00:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:47.863 "params": { 00:15:47.863 "name": "Nvme1", 00:15:47.863 "trtype": "tcp", 00:15:47.863 "traddr": "10.0.0.2", 00:15:47.863 "adrfam": "ipv4", 00:15:47.863 "trsvcid": "4420", 00:15:47.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:47.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:47.863 "hdgst": false, 00:15:47.863 "ddgst": false 00:15:47.863 }, 00:15:47.863 "method": "bdev_nvme_attach_controller" 00:15:47.863 }' 00:15:47.863 [2024-04-27 00:00:17.889038] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:15:47.863 [2024-04-27 00:00:17.889084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369352 ] 00:15:47.863 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.863 [2024-04-27 00:00:17.947876] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.863 [2024-04-27 00:00:18.012248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.122 Running I/O for 10 seconds... 
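The rpc_cmd lines above configure the target for the zcopy run, and the bdevperf output below reports the 10-second verify workload. A sketch of the same setup written as direct scripts/rpc.py calls (in the test these are issued against the nvmf_tgt running inside cvl_0_0_ns_spdk); the bdevperf invocation is an approximation, since the full JSON wrapper that gen_nvmf_target_json emits is only partially visible in this log:

    # Target-side configuration, as traced above: zero-copy TCP transport, one
    # subsystem with up to 10 namespaces, and a 32 MiB malloc bdev as NSID 1.
    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Initiator side: bdevperf reads its bdev config from a file descriptor
    # (--json /dev/fd/62 above). The bdev_nvme_attach_controller entry printed
    # in the trace is assumed to sit inside the usual bdev-subsystem wrapper.
    ./build/examples/bdevperf -t 10 -q 128 -w verify -o 8192 --json <(cat <<'EOF'
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        } ]
      } ]
    }
    EOF
    )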
00:15:58.198 00:15:58.198 Latency(us) 00:15:58.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.198 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:58.198 Verification LBA range: start 0x0 length 0x1000 00:15:58.198 Nvme1n1 : 10.01 7041.28 55.01 0.00 0.00 18122.40 1126.40 27634.35 00:15:58.198 =================================================================================================================== 00:15:58.198 Total : 7041.28 55.01 0.00 0.00 18122.40 1126.40 27634.35 00:15:58.198 00:00:28 -- target/zcopy.sh@39 -- # perfpid=371437 00:15:58.198 00:00:28 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:58.198 00:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:58.198 00:00:28 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:58.198 00:00:28 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:58.198 00:00:28 -- nvmf/common.sh@521 -- # config=() 00:15:58.198 00:00:28 -- nvmf/common.sh@521 -- # local subsystem config 00:15:58.198 00:00:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:58.198 00:00:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:58.198 { 00:15:58.198 "params": { 00:15:58.198 "name": "Nvme$subsystem", 00:15:58.198 "trtype": "$TEST_TRANSPORT", 00:15:58.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:58.198 "adrfam": "ipv4", 00:15:58.198 "trsvcid": "$NVMF_PORT", 00:15:58.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:58.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:58.198 "hdgst": ${hdgst:-false}, 00:15:58.198 "ddgst": ${ddgst:-false} 00:15:58.198 }, 00:15:58.198 "method": "bdev_nvme_attach_controller" 00:15:58.198 } 00:15:58.198 EOF 00:15:58.198 )") 00:15:58.198 00:00:28 -- nvmf/common.sh@543 -- # cat 00:15:58.198 [2024-04-27 00:00:28.328311] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.198 [2024-04-27 00:00:28.328347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.198 00:00:28 -- nvmf/common.sh@545 -- # jq . 
00:15:58.198 00:00:28 -- nvmf/common.sh@546 -- # IFS=, 00:15:58.198 00:00:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:58.198 "params": { 00:15:58.198 "name": "Nvme1", 00:15:58.198 "trtype": "tcp", 00:15:58.198 "traddr": "10.0.0.2", 00:15:58.198 "adrfam": "ipv4", 00:15:58.198 "trsvcid": "4420", 00:15:58.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:58.198 "hdgst": false, 00:15:58.198 "ddgst": false 00:15:58.198 }, 00:15:58.198 "method": "bdev_nvme_attach_controller" 00:15:58.198 }' 00:15:58.198 [2024-04-27 00:00:28.340314] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.198 [2024-04-27 00:00:28.340325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.198 [2024-04-27 00:00:28.352345] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.198 [2024-04-27 00:00:28.352355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.198 [2024-04-27 00:00:28.364377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.198 [2024-04-27 00:00:28.364387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.198 [2024-04-27 00:00:28.369878] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:15:58.198 [2024-04-27 00:00:28.369923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid371437 ] 00:15:58.198 [2024-04-27 00:00:28.376407] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.198 [2024-04-27 00:00:28.376417] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.198 [2024-04-27 00:00:28.388438] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.198 [2024-04-27 00:00:28.388448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.198 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.198 [2024-04-27 00:00:28.400471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.198 [2024-04-27 00:00:28.400481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.198 [2024-04-27 00:00:28.412501] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.198 [2024-04-27 00:00:28.412510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.424547] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.424557] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.428385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.458 [2024-04-27 00:00:28.436567] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.436580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.448597] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.448608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 
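From this point to the end of the section the trace is dominated by the same two messages repeated with only the timestamps changing: while the second bdevperf job (5-second randrw) is being set up and run, namespace-add requests keep asking for NSID 1, which is already occupied, so spdk_nvmf_subsystem_add_ns_ext rejects each one and the RPC layer logs the failure from its paused-subsystem callback. A hedged way to reproduce the same pair of messages by hand, assuming the cnode1/malloc0 setup traced earlier and a hypothetical second bdev named malloc1:

    # malloc1 is introduced here only for illustration; requesting NSID 1, which
    # malloc0 already holds on cnode1, should yield the same two log lines.
    rpc.py bdev_malloc_create 32 4096 -b malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
    # subsystem.c: spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
    # nvmf_rpc.c:  nvmf_rpc_ns_paused: Unable to add namespace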
[2024-04-27 00:00:28.460632] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.460645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.472662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.472675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.484694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.484705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.492530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.458 [2024-04-27 00:00:28.496725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.496735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.508763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.508777] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.520794] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.520807] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.532823] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.532835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.544860] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.544871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.556898] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.556908] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.568934] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.568951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.580957] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.580969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.592990] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.593002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.605021] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.605033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.617053] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.617065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.458 [2024-04-27 00:00:28.670329] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.458 [2024-04-27 00:00:28.670346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.718 [2024-04-27 00:00:28.681232] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.718 [2024-04-27 00:00:28.681244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.718 Running I/O for 5 seconds... 00:15:58.718 [2024-04-27 00:00:28.698365] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.718 [2024-04-27 00:00:28.698384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.718 [2024-04-27 00:00:28.714889] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.714908] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.725901] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.725919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.742604] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.742624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.758693] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.758712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.776143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.776162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.792867] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.792885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.805122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.805140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.819821] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.819847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.836062] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.836079] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.853581] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.853599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.870109] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.870127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.886600] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 
[2024-04-27 00:00:28.886618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.903578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.903596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.920302] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.920321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.719 [2024-04-27 00:00:28.937128] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.719 [2024-04-27 00:00:28.937146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:28.953661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:28.953679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:28.970666] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:28.970684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:28.987392] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:28.987410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.004262] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.004280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.020910] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.020928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.037186] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.037204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.054263] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.054281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.071103] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.071121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.086889] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.086908] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.097712] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.097730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.114295] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.114313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.130632] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.130654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.147490] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.147508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.164512] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.164530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.181300] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.181318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.979 [2024-04-27 00:00:29.198439] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.979 [2024-04-27 00:00:29.198457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.238 [2024-04-27 00:00:29.215253] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.238 [2024-04-27 00:00:29.215271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.238 [2024-04-27 00:00:29.232217] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.238 [2024-04-27 00:00:29.232235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.238 [2024-04-27 00:00:29.248726] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.238 [2024-04-27 00:00:29.248743] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.265658] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.265676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.282508] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.282526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.298816] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.298834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.315981] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.315998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.333500] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.333518] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.350473] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.350490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.367403] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.367421] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.384048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.384065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.401168] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.401186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.418243] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.418261] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.435139] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.435157] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.239 [2024-04-27 00:00:29.451771] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.239 [2024-04-27 00:00:29.451794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.468409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.468427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.485287] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.485305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.502192] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.502210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.519430] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.519448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.536034] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.536051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.553349] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.553367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.570229] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.570246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.587326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.587344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.604317] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.604336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.620747] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.620766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.631955] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.631973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.648268] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.648286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.664900] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.664918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.681730] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.681748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.698178] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.698196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.499 [2024-04-27 00:00:29.714962] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.499 [2024-04-27 00:00:29.714979] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.759 [2024-04-27 00:00:29.731905] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.759 [2024-04-27 00:00:29.731923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.759 [2024-04-27 00:00:29.748565] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.759 [2024-04-27 00:00:29.748584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.759 [2024-04-27 00:00:29.764553] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.759 [2024-04-27 00:00:29.764571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.759 [2024-04-27 00:00:29.782038] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.759 [2024-04-27 00:00:29.782057] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.759 [2024-04-27 00:00:29.798781] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.759 [2024-04-27 00:00:29.798799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.759 [2024-04-27 00:00:29.815142] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.759 [2024-04-27 00:00:29.815160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.759 [2024-04-27 00:00:29.826135] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.759 [2024-04-27 00:00:29.826152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.759 [2024-04-27 00:00:29.842032] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.759 [2024-04-27 00:00:29.842050] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.759 [... elided: the same two-message pair (subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, then nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats one pair every 15-20 ms from 00:00:29.858 through 00:00:33.661, apparently because a background loop in the zcopy test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 against the paused subsystem; the tail of the run continues below ...]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 [2024-04-27 00:00:33.672117] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.663 [2024-04-27 00:00:33.672135] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 [2024-04-27 00:00:33.688236] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.663 [2024-04-27 00:00:33.688253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 [2024-04-27 00:00:33.702857] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.663 [2024-04-27 00:00:33.702874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 00:16:03.663 Latency(us) 00:16:03.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.663 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:03.663 Nvme1n1 : 5.01 13899.57 108.59 0.00 0.00 9198.59 4369.07 20316.16 00:16:03.663 =================================================================================================================== 00:16:03.663 Total : 13899.57 108.59 0.00 0.00 9198.59 4369.07 20316.16 00:16:03.663 [2024-04-27 00:00:33.712234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.663 [2024-04-27 00:00:33.712249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 [2024-04-27 00:00:33.724268] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.663 [2024-04-27 00:00:33.724290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 [2024-04-27 00:00:33.736299] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.663 [2024-04-27 00:00:33.736316] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 [2024-04-27 00:00:33.748329] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.663 [2024-04-27 00:00:33.748345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 [2024-04-27 00:00:33.760359] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.663 [2024-04-27 00:00:33.760371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 [2024-04-27 00:00:33.772392] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.663 [2024-04-27 00:00:33.772404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.663 [2024-04-27 00:00:33.784424] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.664 [2024-04-27 00:00:33.784435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.664 [2024-04-27 00:00:33.796458] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.664 [2024-04-27 00:00:33.796472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.664 [2024-04-27 00:00:33.808489] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.664 [2024-04-27 00:00:33.808500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.664 [2024-04-27 00:00:33.820524] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.664 [2024-04-27 00:00:33.820536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.664 [2024-04-27 00:00:33.832554] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.664 [2024-04-27 00:00:33.832564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (371437) - No such process 00:16:03.664 00:00:33 -- target/zcopy.sh@49 -- # wait 371437 00:16:03.664 00:00:33 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.664 00:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.664 00:00:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.664 00:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.664 00:00:33 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:03.664 00:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.664 00:00:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.664 delay0 00:16:03.664 00:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.664 00:00:33 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:03.664 00:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.664 00:00:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.664 00:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.664 00:00:33 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:03.924 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.924 [2024-04-27 00:00:33.973391] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:12.065 Initializing NVMe Controllers 00:16:12.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:12.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:12.065 Initialization complete. Launching workers. 
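The xtrace lines above (zcopy.sh lines 52-56) are the substance of this step: the test detaches the original namespace, wraps malloc0 in a delay bdev, re-exports the delayed bdev as NSID 1, and then points the abort example at the target. The sketch below is a rough standalone equivalent using scripts/rpc.py directly; it is an illustration only, and it assumes the nvmf target started earlier in this run is still up (subsystem nqn.2016-06.io.spdk:cnode1, bdev malloc0, listener 10.0.0.2:4420) and reachable over the default RPC socket, whereas the log itself goes through the suite's rpc_cmd wrapper.

```bash
#!/usr/bin/env bash
# Rough equivalent of zcopy.sh lines 52-56; illustration only, assuming the
# target, malloc0 and nqn.2016-06.io.spdk:cnode1 already exist from earlier
# in the run and rpc.py can reach the target on its default RPC socket.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Detach the plain malloc namespace from the subsystem.
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in a delay bdev; all four latency knobs (avg/p99 read and
# write) are set to 1,000,000 us so submitted I/O stays outstanding.
"$SPDK/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Re-export the delayed bdev as NSID 1 on the same subsystem.
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Run the abort example for 5 s of 50/50 random read/write at queue depth 64,
# aborting the I/O it submits; this is what produces the NS/CTRLR abort
# statistics that follow in the log.
"$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```

The million-microsecond delays appear to be deliberate: they keep the random read/write I/O outstanding for essentially the whole 5-second window, so the abort example always has commands in flight to abort, which is consistent with the statistics reported next (18832 aborts submitted, 18739 successful).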
00:16:12.065 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 263, failed: 18666 00:16:12.065 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18832, failed to submit 97 00:16:12.065 success 18739, unsuccess 93, failed 0 00:16:12.065 00:00:41 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:12.065 00:00:41 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:12.065 00:00:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:12.065 00:00:41 -- nvmf/common.sh@117 -- # sync 00:16:12.065 00:00:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.065 00:00:41 -- nvmf/common.sh@120 -- # set +e 00:16:12.065 00:00:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.065 00:00:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.065 rmmod nvme_tcp 00:16:12.065 rmmod nvme_fabrics 00:16:12.065 rmmod nvme_keyring 00:16:12.065 00:00:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.065 00:00:41 -- nvmf/common.sh@124 -- # set -e 00:16:12.065 00:00:41 -- nvmf/common.sh@125 -- # return 0 00:16:12.065 00:00:41 -- nvmf/common.sh@478 -- # '[' -n 369122 ']' 00:16:12.065 00:00:41 -- nvmf/common.sh@479 -- # killprocess 369122 00:16:12.065 00:00:41 -- common/autotest_common.sh@936 -- # '[' -z 369122 ']' 00:16:12.065 00:00:41 -- common/autotest_common.sh@940 -- # kill -0 369122 00:16:12.065 00:00:41 -- common/autotest_common.sh@941 -- # uname 00:16:12.065 00:00:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.065 00:00:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 369122 00:16:12.065 00:00:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:12.065 00:00:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:12.065 00:00:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 369122' 00:16:12.065 killing process with pid 369122 00:16:12.065 00:00:41 -- common/autotest_common.sh@955 -- # kill 369122 00:16:12.065 00:00:41 -- common/autotest_common.sh@960 -- # wait 369122 00:16:12.065 00:00:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:12.065 00:00:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:12.065 00:00:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:12.065 00:00:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.065 00:00:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.065 00:00:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.065 00:00:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.065 00:00:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.449 00:00:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:13.449 00:16:13.449 real 0m34.009s 00:16:13.449 user 0m45.424s 00:16:13.449 sys 0m10.809s 00:16:13.449 00:00:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:13.449 00:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:13.449 ************************************ 00:16:13.449 END TEST nvmf_zcopy 00:16:13.449 ************************************ 00:16:13.449 00:00:43 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:13.449 00:00:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:13.449 00:00:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.449 00:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:13.449 ************************************ 
00:16:13.449 START TEST nvmf_nmic 00:16:13.449 ************************************ 00:16:13.449 00:00:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:13.711 * Looking for test storage... 00:16:13.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.711 00:00:43 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.711 00:00:43 -- nvmf/common.sh@7 -- # uname -s 00:16:13.711 00:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.711 00:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.711 00:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.711 00:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.711 00:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.711 00:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.711 00:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.711 00:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.711 00:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.711 00:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.711 00:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:13.711 00:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:13.711 00:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.711 00:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.711 00:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.711 00:00:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.711 00:00:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.711 00:00:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.711 00:00:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.711 00:00:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.711 00:00:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.711 00:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.711 00:00:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.711 00:00:43 -- paths/export.sh@5 -- # export PATH 00:16:13.711 00:00:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.711 00:00:43 -- nvmf/common.sh@47 -- # : 0 00:16:13.711 00:00:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.711 00:00:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.711 00:00:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.711 00:00:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.711 00:00:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.711 00:00:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.711 00:00:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.711 00:00:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.711 00:00:43 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.711 00:00:43 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.711 00:00:43 -- target/nmic.sh@14 -- # nvmftestinit 00:16:13.711 00:00:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:13.711 00:00:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.711 00:00:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:13.711 00:00:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:13.711 00:00:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:13.711 00:00:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.711 00:00:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.711 00:00:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.711 00:00:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:13.711 00:00:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:13.711 00:00:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.711 00:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:20.320 00:00:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:20.320 00:00:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.320 00:00:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.320 00:00:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.320 00:00:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.320 00:00:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.320 00:00:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.320 00:00:50 -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.320 00:00:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.320 00:00:50 -- nvmf/common.sh@296 -- # 
e810=() 00:16:20.320 00:00:50 -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.320 00:00:50 -- nvmf/common.sh@297 -- # x722=() 00:16:20.320 00:00:50 -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.320 00:00:50 -- nvmf/common.sh@298 -- # mlx=() 00:16:20.320 00:00:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.320 00:00:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.320 00:00:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.321 00:00:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.321 00:00:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.321 00:00:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.321 00:00:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.321 00:00:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:20.321 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:20.321 00:00:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.321 00:00:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:20.321 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:20.321 00:00:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.321 00:00:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.321 00:00:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.321 00:00:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:20.321 00:00:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.321 00:00:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:20.321 Found net 
devices under 0000:31:00.0: cvl_0_0 00:16:20.321 00:00:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.321 00:00:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.321 00:00:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.321 00:00:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:20.321 00:00:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.321 00:00:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:20.321 Found net devices under 0000:31:00.1: cvl_0_1 00:16:20.321 00:00:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.321 00:00:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:20.321 00:00:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:20.321 00:00:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:20.321 00:00:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:20.321 00:00:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.321 00:00:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.321 00:00:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.321 00:00:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.321 00:00:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.321 00:00:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.321 00:00:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.321 00:00:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.321 00:00:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.321 00:00:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.321 00:00:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.321 00:00:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.321 00:00:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.583 00:00:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.583 00:00:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.583 00:00:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.583 00:00:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.583 00:00:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.583 00:00:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.583 00:00:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:20.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:16:20.583 00:16:20.583 --- 10.0.0.2 ping statistics --- 00:16:20.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.583 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:16:20.583 00:00:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:16:20.583 00:16:20.583 --- 10.0.0.1 ping statistics --- 00:16:20.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.583 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:16:20.583 00:00:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.583 00:00:50 -- nvmf/common.sh@411 -- # return 0 00:16:20.583 00:00:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:20.583 00:00:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.583 00:00:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:20.583 00:00:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:20.583 00:00:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.583 00:00:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:20.583 00:00:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:20.844 00:00:50 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:20.844 00:00:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:20.844 00:00:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:20.844 00:00:50 -- common/autotest_common.sh@10 -- # set +x 00:16:20.844 00:00:50 -- nvmf/common.sh@470 -- # nvmfpid=378220 00:16:20.844 00:00:50 -- nvmf/common.sh@471 -- # waitforlisten 378220 00:16:20.844 00:00:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:20.844 00:00:50 -- common/autotest_common.sh@817 -- # '[' -z 378220 ']' 00:16:20.844 00:00:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.844 00:00:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:20.844 00:00:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.844 00:00:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:20.844 00:00:50 -- common/autotest_common.sh@10 -- # set +x 00:16:20.844 [2024-04-27 00:00:50.865447] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:16:20.844 [2024-04-27 00:00:50.865500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.844 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.844 [2024-04-27 00:00:50.932175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.844 [2024-04-27 00:00:50.998433] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.845 [2024-04-27 00:00:50.998469] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.845 [2024-04-27 00:00:50.998477] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.845 [2024-04-27 00:00:50.998483] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.845 [2024-04-27 00:00:50.998489] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:20.845 [2024-04-27 00:00:50.998596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.845 [2024-04-27 00:00:50.998729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.845 [2024-04-27 00:00:50.998886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.845 [2024-04-27 00:00:50.999065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.418 00:00:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:21.418 00:00:51 -- common/autotest_common.sh@850 -- # return 0 00:16:21.418 00:00:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:21.418 00:00:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:21.418 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 00:00:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.678 00:00:51 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.678 00:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.678 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 [2024-04-27 00:00:51.680403] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.678 00:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.678 00:00:51 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:21.678 00:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.678 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 Malloc0 00:16:21.678 00:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.678 00:00:51 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:21.678 00:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.678 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 00:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.678 00:00:51 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.678 00:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.678 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 00:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.678 00:00:51 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.678 00:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.678 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 [2024-04-27 00:00:51.739777] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.678 00:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.678 00:00:51 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:21.678 test case1: single bdev can't be used in multiple subsystems 00:16:21.678 00:00:51 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:21.678 00:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.678 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 00:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.678 00:00:51 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:21.678 00:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:16:21.678 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 00:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.678 00:00:51 -- target/nmic.sh@28 -- # nmic_status=0 00:16:21.678 00:00:51 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:21.678 00:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.678 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 [2024-04-27 00:00:51.775746] bdev.c:8011:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:21.678 [2024-04-27 00:00:51.775763] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:21.678 [2024-04-27 00:00:51.775770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.678 request: 00:16:21.678 { 00:16:21.678 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:21.678 "namespace": { 00:16:21.678 "bdev_name": "Malloc0", 00:16:21.678 "no_auto_visible": false 00:16:21.678 }, 00:16:21.678 "method": "nvmf_subsystem_add_ns", 00:16:21.678 "req_id": 1 00:16:21.678 } 00:16:21.678 Got JSON-RPC error response 00:16:21.678 response: 00:16:21.678 { 00:16:21.678 "code": -32602, 00:16:21.678 "message": "Invalid parameters" 00:16:21.678 } 00:16:21.678 00:00:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:21.678 00:00:51 -- target/nmic.sh@29 -- # nmic_status=1 00:16:21.678 00:00:51 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:21.678 00:00:51 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:21.678 Adding namespace failed - expected result. 00:16:21.678 00:00:51 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:21.678 test case2: host connect to nvmf target in multiple paths 00:16:21.678 00:00:51 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:21.678 00:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.678 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.678 [2024-04-27 00:00:51.787892] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:21.678 00:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.678 00:00:51 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.058 00:00:53 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:24.969 00:00:54 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:24.969 00:00:54 -- common/autotest_common.sh@1184 -- # local i=0 00:16:24.969 00:00:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.969 00:00:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:24.969 00:00:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:26.889 00:00:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:26.889 00:00:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:26.889 00:00:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.889 00:00:56 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:26.889 00:00:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.889 00:00:56 -- common/autotest_common.sh@1194 -- # return 0 00:16:26.889 00:00:56 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:26.889 [global] 00:16:26.889 thread=1 00:16:26.889 invalidate=1 00:16:26.889 rw=write 00:16:26.889 time_based=1 00:16:26.889 runtime=1 00:16:26.889 ioengine=libaio 00:16:26.889 direct=1 00:16:26.889 bs=4096 00:16:26.889 iodepth=1 00:16:26.889 norandommap=0 00:16:26.889 numjobs=1 00:16:26.889 00:16:26.889 verify_dump=1 00:16:26.889 verify_backlog=512 00:16:26.889 verify_state_save=0 00:16:26.889 do_verify=1 00:16:26.889 verify=crc32c-intel 00:16:26.889 [job0] 00:16:26.889 filename=/dev/nvme0n1 00:16:26.889 Could not set queue depth (nvme0n1) 00:16:27.150 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.150 fio-3.35 00:16:27.150 Starting 1 thread 00:16:28.106 00:16:28.106 job0: (groupid=0, jobs=1): err= 0: pid=379466: Sat Apr 27 00:00:58 2024 00:16:28.106 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:16:28.106 slat (nsec): min=10674, max=43532, avg=23647.11, stdev=7326.78 00:16:28.106 clat (usec): min=40903, max=41449, avg=40989.39, stdev=117.87 00:16:28.106 lat (usec): min=40933, max=41492, avg=41013.04, stdev=122.08 00:16:28.106 clat percentiles (usec): 00:16:28.106 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:28.106 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:28.106 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:28.106 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:28.106 | 99.99th=[41681] 00:16:28.106 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:16:28.106 slat (usec): min=9, max=26881, avg=82.06, stdev=1186.74 00:16:28.106 clat (usec): min=187, max=748, avg=499.86, stdev=111.77 00:16:28.106 lat (usec): min=197, max=27543, avg=581.92, stdev=1199.48 00:16:28.106 clat percentiles (usec): 00:16:28.106 | 1.00th=[ 245], 5.00th=[ 297], 10.00th=[ 359], 20.00th=[ 404], 00:16:28.106 | 30.00th=[ 453], 40.00th=[ 474], 50.00th=[ 494], 60.00th=[ 523], 00:16:28.106 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 668], 00:16:28.106 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 750], 99.95th=[ 750], 00:16:28.106 | 99.99th=[ 750] 00:16:28.106 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:28.106 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:28.106 lat (usec) : 250=1.32%, 500=49.43%, 750=45.85% 00:16:28.106 lat (msec) : 50=3.40% 00:16:28.106 cpu : usr=0.87%, sys=1.25%, ctx=533, majf=0, minf=1 00:16:28.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.106 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.106 00:16:28.106 Run status group 0 (all jobs): 00:16:28.106 READ: bw=69.2KiB/s (70.9kB/s), 69.2KiB/s-69.2KiB/s (70.9kB/s-70.9kB/s), io=72.0KiB (73.7kB), run=1040-1040msec 00:16:28.106 WRITE: bw=1969KiB/s (2016kB/s), 
1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:16:28.106 00:16:28.106 Disk stats (read/write): 00:16:28.106 nvme0n1: ios=39/512, merge=0/0, ticks=1541/233, in_queue=1774, util=98.70% 00:16:28.106 00:00:58 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:28.367 00:00:58 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.367 00:00:58 -- common/autotest_common.sh@1205 -- # local i=0 00:16:28.367 00:00:58 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:28.367 00:00:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.367 00:00:58 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.367 00:00:58 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:28.367 00:00:58 -- common/autotest_common.sh@1217 -- # return 0 00:16:28.367 00:00:58 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:28.367 00:00:58 -- target/nmic.sh@53 -- # nvmftestfini 00:16:28.367 00:00:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:28.367 00:00:58 -- nvmf/common.sh@117 -- # sync 00:16:28.367 00:00:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.367 00:00:58 -- nvmf/common.sh@120 -- # set +e 00:16:28.367 00:00:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.367 00:00:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.367 rmmod nvme_tcp 00:16:28.367 rmmod nvme_fabrics 00:16:28.367 rmmod nvme_keyring 00:16:28.367 00:00:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.367 00:00:58 -- nvmf/common.sh@124 -- # set -e 00:16:28.367 00:00:58 -- nvmf/common.sh@125 -- # return 0 00:16:28.367 00:00:58 -- nvmf/common.sh@478 -- # '[' -n 378220 ']' 00:16:28.367 00:00:58 -- nvmf/common.sh@479 -- # killprocess 378220 00:16:28.367 00:00:58 -- common/autotest_common.sh@936 -- # '[' -z 378220 ']' 00:16:28.367 00:00:58 -- common/autotest_common.sh@940 -- # kill -0 378220 00:16:28.367 00:00:58 -- common/autotest_common.sh@941 -- # uname 00:16:28.367 00:00:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:28.367 00:00:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 378220 00:16:28.628 00:00:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:28.628 00:00:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:28.628 00:00:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 378220' 00:16:28.628 killing process with pid 378220 00:16:28.628 00:00:58 -- common/autotest_common.sh@955 -- # kill 378220 00:16:28.628 00:00:58 -- common/autotest_common.sh@960 -- # wait 378220 00:16:28.628 00:00:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:28.628 00:00:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:28.628 00:00:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:28.628 00:00:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.628 00:00:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.628 00:00:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.628 00:00:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.628 00:00:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.180 00:01:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.180 00:16:31.180 real 0m17.246s 00:16:31.181 user 0m47.712s 00:16:31.181 sys 0m6.008s 
00:16:31.181 00:01:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.181 00:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:31.181 ************************************ 00:16:31.181 END TEST nvmf_nmic 00:16:31.181 ************************************ 00:16:31.181 00:01:00 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:31.181 00:01:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:31.181 00:01:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.181 00:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:31.181 ************************************ 00:16:31.181 START TEST nvmf_fio_target 00:16:31.181 ************************************ 00:16:31.181 00:01:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:31.181 * Looking for test storage... 00:16:31.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.181 00:01:01 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.181 00:01:01 -- nvmf/common.sh@7 -- # uname -s 00:16:31.181 00:01:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.181 00:01:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.181 00:01:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.181 00:01:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.181 00:01:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.181 00:01:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.181 00:01:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.181 00:01:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.181 00:01:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.181 00:01:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.181 00:01:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.181 00:01:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.181 00:01:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.181 00:01:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.181 00:01:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.181 00:01:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.181 00:01:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.181 00:01:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.181 00:01:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.181 00:01:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.181 00:01:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.181 00:01:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.181 00:01:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.181 00:01:01 -- paths/export.sh@5 -- # export PATH 00:16:31.181 00:01:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.181 00:01:01 -- nvmf/common.sh@47 -- # : 0 00:16:31.181 00:01:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.181 00:01:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.181 00:01:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.181 00:01:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.181 00:01:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.181 00:01:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.181 00:01:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.181 00:01:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.181 00:01:01 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.181 00:01:01 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.181 00:01:01 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.181 00:01:01 -- target/fio.sh@16 -- # nvmftestinit 00:16:31.181 00:01:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:31.181 00:01:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.181 00:01:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:31.181 00:01:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:31.181 00:01:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:31.181 00:01:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.181 00:01:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.181 00:01:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.181 00:01:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:31.181 00:01:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:31.181 00:01:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.181 00:01:01 -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.780 00:01:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:37.780 00:01:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:37.780 00:01:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:37.780 00:01:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:37.780 00:01:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:37.780 00:01:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:37.780 00:01:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:37.780 00:01:07 -- nvmf/common.sh@295 -- # net_devs=() 00:16:37.780 00:01:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:37.780 00:01:07 -- nvmf/common.sh@296 -- # e810=() 00:16:37.780 00:01:07 -- nvmf/common.sh@296 -- # local -ga e810 00:16:37.780 00:01:07 -- nvmf/common.sh@297 -- # x722=() 00:16:37.780 00:01:07 -- nvmf/common.sh@297 -- # local -ga x722 00:16:37.780 00:01:07 -- nvmf/common.sh@298 -- # mlx=() 00:16:37.780 00:01:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:37.780 00:01:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.780 00:01:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:37.780 00:01:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:37.780 00:01:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:37.780 00:01:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.780 00:01:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:37.780 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:37.780 00:01:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.780 00:01:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:37.780 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:37.780 00:01:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:16:37.780 00:01:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:37.780 00:01:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:37.780 00:01:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.780 00:01:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.780 00:01:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:37.780 00:01:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.780 00:01:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:37.780 Found net devices under 0000:31:00.0: cvl_0_0 00:16:37.780 00:01:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.780 00:01:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.780 00:01:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.780 00:01:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:37.780 00:01:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.781 00:01:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:37.781 Found net devices under 0000:31:00.1: cvl_0_1 00:16:37.781 00:01:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.781 00:01:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:37.781 00:01:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:37.781 00:01:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:37.781 00:01:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:37.781 00:01:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:37.781 00:01:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.781 00:01:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.781 00:01:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:37.781 00:01:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:37.781 00:01:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:37.781 00:01:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:37.781 00:01:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:37.781 00:01:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:37.781 00:01:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.781 00:01:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:37.781 00:01:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:37.781 00:01:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:37.781 00:01:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:37.781 00:01:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:37.781 00:01:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:37.781 00:01:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:37.781 00:01:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:37.781 00:01:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:37.781 00:01:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:37.781 00:01:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:37.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:37.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:16:37.781 00:16:37.781 --- 10.0.0.2 ping statistics --- 00:16:37.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.781 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:16:37.781 00:01:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:37.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:37.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:16:37.781 00:16:37.781 --- 10.0.0.1 ping statistics --- 00:16:37.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.781 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:16:37.781 00:01:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.781 00:01:07 -- nvmf/common.sh@411 -- # return 0 00:16:37.781 00:01:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:37.781 00:01:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.781 00:01:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:37.781 00:01:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:37.781 00:01:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.781 00:01:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:37.781 00:01:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:37.781 00:01:07 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:37.781 00:01:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:37.781 00:01:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:37.781 00:01:07 -- common/autotest_common.sh@10 -- # set +x 00:16:37.781 00:01:07 -- nvmf/common.sh@470 -- # nvmfpid=384095 00:16:37.781 00:01:07 -- nvmf/common.sh@471 -- # waitforlisten 384095 00:16:37.781 00:01:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:37.781 00:01:07 -- common/autotest_common.sh@817 -- # '[' -z 384095 ']' 00:16:37.781 00:01:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.781 00:01:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:37.781 00:01:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.781 00:01:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:37.781 00:01:07 -- common/autotest_common.sh@10 -- # set +x 00:16:37.781 [2024-04-27 00:01:07.823747] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:16:37.781 [2024-04-27 00:01:07.823794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.781 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.781 [2024-04-27 00:01:07.889342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:37.781 [2024-04-27 00:01:07.953629] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.781 [2024-04-27 00:01:07.953666] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:37.781 [2024-04-27 00:01:07.953674] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.781 [2024-04-27 00:01:07.953680] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.781 [2024-04-27 00:01:07.953686] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.781 [2024-04-27 00:01:07.953799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.781 [2024-04-27 00:01:07.953934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.781 [2024-04-27 00:01:07.953955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.781 [2024-04-27 00:01:07.953957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.751 00:01:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:38.751 00:01:08 -- common/autotest_common.sh@850 -- # return 0 00:16:38.751 00:01:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:38.751 00:01:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:38.751 00:01:08 -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 00:01:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.751 00:01:08 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:38.751 [2024-04-27 00:01:08.771931] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.751 00:01:08 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.044 00:01:08 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:39.044 00:01:08 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.044 00:01:09 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:39.044 00:01:09 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.304 00:01:09 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:39.304 00:01:09 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.304 00:01:09 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:39.304 00:01:09 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:39.565 00:01:09 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.825 00:01:09 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:39.825 00:01:09 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:39.825 00:01:10 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:39.825 00:01:10 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.085 00:01:10 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:40.085 00:01:10 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:40.345 00:01:10 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:40.345 00:01:10 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:40.345 00:01:10 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.605 00:01:10 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:40.605 00:01:10 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.867 00:01:10 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.867 [2024-04-27 00:01:11.007005] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.867 00:01:11 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:41.129 00:01:11 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:41.390 00:01:11 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:42.776 00:01:12 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:42.776 00:01:12 -- common/autotest_common.sh@1184 -- # local i=0 00:16:42.776 00:01:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:42.776 00:01:12 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:42.776 00:01:12 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:42.776 00:01:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:45.323 00:01:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:45.323 00:01:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:45.323 00:01:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.323 00:01:14 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:45.323 00:01:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.323 00:01:14 -- common/autotest_common.sh@1194 -- # return 0 00:16:45.323 00:01:14 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:45.323 [global] 00:16:45.323 thread=1 00:16:45.323 invalidate=1 00:16:45.323 rw=write 00:16:45.323 time_based=1 00:16:45.323 runtime=1 00:16:45.323 ioengine=libaio 00:16:45.323 direct=1 00:16:45.323 bs=4096 00:16:45.323 iodepth=1 00:16:45.323 norandommap=0 00:16:45.323 numjobs=1 00:16:45.323 00:16:45.323 verify_dump=1 00:16:45.323 verify_backlog=512 00:16:45.323 verify_state_save=0 00:16:45.323 do_verify=1 00:16:45.323 verify=crc32c-intel 00:16:45.323 [job0] 00:16:45.323 filename=/dev/nvme0n1 00:16:45.323 [job1] 00:16:45.323 filename=/dev/nvme0n2 00:16:45.323 [job2] 00:16:45.323 filename=/dev/nvme0n3 00:16:45.323 [job3] 00:16:45.323 filename=/dev/nvme0n4 00:16:45.323 Could not set queue depth (nvme0n1) 00:16:45.323 Could not set queue depth (nvme0n2) 00:16:45.323 Could not set queue depth (nvme0n3) 00:16:45.323 Could not set queue depth (nvme0n4) 00:16:45.323 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:16:45.323 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:45.323 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:45.323 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:45.323 fio-3.35 00:16:45.323 Starting 4 threads 00:16:46.726 00:16:46.727 job0: (groupid=0, jobs=1): err= 0: pid=385762: Sat Apr 27 00:01:16 2024 00:16:46.727 read: IOPS=16, BW=65.8KiB/s (67.4kB/s)(68.0KiB/1033msec) 00:16:46.727 slat (nsec): min=10647, max=26677, avg=24776.00, stdev=3654.67 00:16:46.727 clat (usec): min=1145, max=43032, avg=39907.52, stdev=9999.75 00:16:46.727 lat (usec): min=1155, max=43058, avg=39932.30, stdev=10003.38 00:16:46.727 clat percentiles (usec): 00:16:46.727 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41681], 20.00th=[41681], 00:16:46.727 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:46.727 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:16:46.727 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:46.727 | 99.99th=[43254] 00:16:46.727 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:16:46.727 slat (usec): min=9, max=163, avg=31.04, stdev=12.13 00:16:46.727 clat (usec): min=393, max=1143, avg=653.97, stdev=108.66 00:16:46.727 lat (usec): min=404, max=1183, avg=685.02, stdev=113.46 00:16:46.727 clat percentiles (usec): 00:16:46.727 | 1.00th=[ 408], 5.00th=[ 474], 10.00th=[ 510], 20.00th=[ 553], 00:16:46.727 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 652], 60.00th=[ 685], 00:16:46.727 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 807], 00:16:46.727 | 99.00th=[ 898], 99.50th=[ 1037], 99.90th=[ 1139], 99.95th=[ 1139], 00:16:46.727 | 99.99th=[ 1139] 00:16:46.727 bw ( KiB/s): min= 4096, max= 4096, per=36.67%, avg=4096.00, stdev= 0.00, samples=1 00:16:46.727 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:46.727 lat (usec) : 500=7.56%, 750=73.53%, 1000=15.12% 00:16:46.727 lat (msec) : 2=0.76%, 50=3.02% 00:16:46.727 cpu : usr=0.48%, sys=1.74%, ctx=531, majf=0, minf=1 00:16:46.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:46.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.727 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:46.727 job1: (groupid=0, jobs=1): err= 0: pid=385763: Sat Apr 27 00:01:16 2024 00:16:46.727 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:46.727 slat (nsec): min=6882, max=60171, avg=25561.68, stdev=4095.34 00:16:46.727 clat (usec): min=612, max=1065, avg=895.49, stdev=80.68 00:16:46.727 lat (usec): min=638, max=1091, avg=921.05, stdev=81.31 00:16:46.727 clat percentiles (usec): 00:16:46.727 | 1.00th=[ 676], 5.00th=[ 742], 10.00th=[ 775], 20.00th=[ 832], 00:16:46.727 | 30.00th=[ 865], 40.00th=[ 889], 50.00th=[ 914], 60.00th=[ 930], 00:16:46.727 | 70.00th=[ 947], 80.00th=[ 963], 90.00th=[ 979], 95.00th=[ 996], 00:16:46.727 | 99.00th=[ 1029], 99.50th=[ 1057], 99.90th=[ 1074], 99.95th=[ 1074], 00:16:46.727 | 99.99th=[ 1074] 00:16:46.727 write: IOPS=1018, BW=4076KiB/s (4174kB/s)(4080KiB/1001msec); 0 zone resets 00:16:46.727 slat (nsec): min=9537, max=78779, avg=28172.31, 
stdev=10756.23 00:16:46.727 clat (usec): min=221, max=792, avg=479.20, stdev=82.63 00:16:46.727 lat (usec): min=231, max=823, avg=507.37, stdev=86.53 00:16:46.727 clat percentiles (usec): 00:16:46.727 | 1.00th=[ 293], 5.00th=[ 322], 10.00th=[ 371], 20.00th=[ 408], 00:16:46.727 | 30.00th=[ 449], 40.00th=[ 478], 50.00th=[ 490], 60.00th=[ 506], 00:16:46.727 | 70.00th=[ 519], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 594], 00:16:46.727 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 791], 99.95th=[ 791], 00:16:46.727 | 99.99th=[ 791] 00:16:46.727 bw ( KiB/s): min= 4096, max= 4096, per=36.67%, avg=4096.00, stdev= 0.00, samples=1 00:16:46.727 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:46.727 lat (usec) : 250=0.13%, 500=36.62%, 750=31.40%, 1000=30.35% 00:16:46.727 lat (msec) : 2=1.50% 00:16:46.727 cpu : usr=2.10%, sys=4.40%, ctx=1533, majf=0, minf=1 00:16:46.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:46.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.727 issued rwts: total=512,1020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:46.727 job2: (groupid=0, jobs=1): err= 0: pid=385764: Sat Apr 27 00:01:16 2024 00:16:46.727 read: IOPS=119, BW=479KiB/s (490kB/s)(484KiB/1011msec) 00:16:46.727 slat (nsec): min=7372, max=45179, avg=26584.13, stdev=5625.88 00:16:46.727 clat (usec): min=628, max=43073, avg=6135.77, stdev=13637.11 00:16:46.727 lat (usec): min=655, max=43100, avg=6162.36, stdev=13636.88 00:16:46.727 clat percentiles (usec): 00:16:46.727 | 1.00th=[ 635], 5.00th=[ 750], 10.00th=[ 816], 20.00th=[ 857], 00:16:46.727 | 30.00th=[ 873], 40.00th=[ 898], 50.00th=[ 922], 60.00th=[ 955], 00:16:46.727 | 70.00th=[ 996], 80.00th=[ 1106], 90.00th=[41681], 95.00th=[42206], 00:16:46.727 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:46.727 | 99.99th=[43254] 00:16:46.727 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:16:46.727 slat (nsec): min=9612, max=69865, avg=28333.20, stdev=11062.85 00:16:46.727 clat (usec): min=170, max=732, avg=481.58, stdev=82.05 00:16:46.727 lat (usec): min=182, max=743, avg=509.91, stdev=86.98 00:16:46.727 clat percentiles (usec): 00:16:46.727 | 1.00th=[ 293], 5.00th=[ 334], 10.00th=[ 371], 20.00th=[ 412], 00:16:46.727 | 30.00th=[ 441], 40.00th=[ 465], 50.00th=[ 494], 60.00th=[ 519], 00:16:46.727 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 603], 00:16:46.727 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 734], 99.95th=[ 734], 00:16:46.727 | 99.99th=[ 734] 00:16:46.727 bw ( KiB/s): min= 4096, max= 4096, per=36.67%, avg=4096.00, stdev= 0.00, samples=1 00:16:46.727 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:46.727 lat (usec) : 250=0.47%, 500=41.71%, 750=39.65%, 1000=12.64% 00:16:46.727 lat (msec) : 2=3.00%, 20=0.16%, 50=2.37% 00:16:46.727 cpu : usr=1.09%, sys=1.49%, ctx=634, majf=0, minf=1 00:16:46.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:46.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.727 issued rwts: total=121,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:46.727 job3: (groupid=0, jobs=1): err= 0: 
pid=385765: Sat Apr 27 00:01:16 2024 00:16:46.727 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:46.727 slat (nsec): min=6951, max=61348, avg=27478.04, stdev=3535.87 00:16:46.727 clat (usec): min=574, max=1244, avg=1025.05, stdev=84.11 00:16:46.727 lat (usec): min=599, max=1271, avg=1052.53, stdev=84.54 00:16:46.727 clat percentiles (usec): 00:16:46.727 | 1.00th=[ 734], 5.00th=[ 873], 10.00th=[ 930], 20.00th=[ 979], 00:16:46.727 | 30.00th=[ 1004], 40.00th=[ 1020], 50.00th=[ 1029], 60.00th=[ 1057], 00:16:46.727 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1139], 00:16:46.727 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:16:46.727 | 99.99th=[ 1237] 00:16:46.727 write: IOPS=840, BW=3361KiB/s (3441kB/s)(3364KiB/1001msec); 0 zone resets 00:16:46.727 slat (usec): min=6, max=1635, avg=31.07, stdev=66.76 00:16:46.727 clat (usec): min=131, max=892, avg=505.82, stdev=161.33 00:16:46.727 lat (usec): min=141, max=2051, avg=536.90, stdev=175.89 00:16:46.727 clat percentiles (usec): 00:16:46.727 | 1.00th=[ 147], 5.00th=[ 273], 10.00th=[ 314], 20.00th=[ 343], 00:16:46.727 | 30.00th=[ 379], 40.00th=[ 437], 50.00th=[ 515], 60.00th=[ 562], 00:16:46.727 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 725], 95.00th=[ 750], 00:16:46.727 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 889], 99.95th=[ 889], 00:16:46.727 | 99.99th=[ 889] 00:16:46.727 bw ( KiB/s): min= 4096, max= 4096, per=36.67%, avg=4096.00, stdev= 0.00, samples=1 00:16:46.727 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:46.727 lat (usec) : 250=2.44%, 500=27.05%, 750=29.86%, 1000=14.12% 00:16:46.727 lat (msec) : 2=26.53% 00:16:46.727 cpu : usr=2.20%, sys=4.90%, ctx=1356, majf=0, minf=1 00:16:46.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:46.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.727 issued rwts: total=512,841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:46.727 00:16:46.727 Run status group 0 (all jobs): 00:16:46.727 READ: bw=4500KiB/s (4608kB/s), 65.8KiB/s-2046KiB/s (67.4kB/s-2095kB/s), io=4648KiB (4760kB), run=1001-1033msec 00:16:46.727 WRITE: bw=10.9MiB/s (11.4MB/s), 1983KiB/s-4076KiB/s (2030kB/s-4174kB/s), io=11.3MiB (11.8MB), run=1001-1033msec 00:16:46.727 00:16:46.727 Disk stats (read/write): 00:16:46.727 nvme0n1: ios=58/512, merge=0/0, ticks=519/322, in_queue=841, util=87.17% 00:16:46.727 nvme0n2: ios=534/698, merge=0/0, ticks=1320/328, in_queue=1648, util=87.96% 00:16:46.727 nvme0n3: ios=163/512, merge=0/0, ticks=774/237, in_queue=1011, util=92.39% 00:16:46.727 nvme0n4: ios=537/512, merge=0/0, ticks=803/272, in_queue=1075, util=97.11% 00:16:46.727 00:01:16 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:46.727 [global] 00:16:46.727 thread=1 00:16:46.727 invalidate=1 00:16:46.727 rw=randwrite 00:16:46.727 time_based=1 00:16:46.727 runtime=1 00:16:46.727 ioengine=libaio 00:16:46.727 direct=1 00:16:46.727 bs=4096 00:16:46.727 iodepth=1 00:16:46.727 norandommap=0 00:16:46.727 numjobs=1 00:16:46.727 00:16:46.727 verify_dump=1 00:16:46.727 verify_backlog=512 00:16:46.728 verify_state_save=0 00:16:46.728 do_verify=1 00:16:46.728 verify=crc32c-intel 00:16:46.728 [job0] 00:16:46.728 filename=/dev/nvme0n1 00:16:46.728 [job1] 00:16:46.728 
filename=/dev/nvme0n2 00:16:46.728 [job2] 00:16:46.728 filename=/dev/nvme0n3 00:16:46.728 [job3] 00:16:46.728 filename=/dev/nvme0n4 00:16:46.728 Could not set queue depth (nvme0n1) 00:16:46.728 Could not set queue depth (nvme0n2) 00:16:46.728 Could not set queue depth (nvme0n3) 00:16:46.728 Could not set queue depth (nvme0n4) 00:16:46.989 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.989 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.989 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.989 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.989 fio-3.35 00:16:46.989 Starting 4 threads 00:16:48.403 00:16:48.403 job0: (groupid=0, jobs=1): err= 0: pid=386281: Sat Apr 27 00:01:18 2024 00:16:48.403 read: IOPS=15, BW=61.8KiB/s (63.3kB/s)(64.0KiB/1035msec) 00:16:48.403 slat (nsec): min=25097, max=30256, avg=25862.31, stdev=1215.11 00:16:48.403 clat (usec): min=41825, max=42967, avg=42091.65, stdev=342.40 00:16:48.403 lat (usec): min=41851, max=42992, avg=42117.51, stdev=342.26 00:16:48.403 clat percentiles (usec): 00:16:48.403 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:48.403 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:48.403 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:16:48.403 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:48.403 | 99.99th=[42730] 00:16:48.403 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:16:48.403 slat (nsec): min=8602, max=50393, avg=27624.56, stdev=9046.30 00:16:48.403 clat (usec): min=319, max=1041, avg=669.99, stdev=116.76 00:16:48.403 lat (usec): min=330, max=1073, avg=697.61, stdev=121.11 00:16:48.403 clat percentiles (usec): 00:16:48.403 | 1.00th=[ 355], 5.00th=[ 445], 10.00th=[ 506], 20.00th=[ 570], 00:16:48.403 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 717], 00:16:48.403 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 832], 00:16:48.403 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 1045], 99.95th=[ 1045], 00:16:48.403 | 99.99th=[ 1045] 00:16:48.403 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.403 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.403 lat (usec) : 500=9.09%, 750=61.55%, 1000=26.14% 00:16:48.403 lat (msec) : 2=0.19%, 50=3.03% 00:16:48.403 cpu : usr=1.45%, sys=1.35%, ctx=528, majf=0, minf=1 00:16:48.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.403 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.403 job1: (groupid=0, jobs=1): err= 0: pid=386282: Sat Apr 27 00:01:18 2024 00:16:48.403 read: IOPS=111, BW=448KiB/s (459kB/s)(460KiB/1027msec) 00:16:48.403 slat (nsec): min=9235, max=50714, avg=26592.93, stdev=4326.87 00:16:48.403 clat (usec): min=939, max=42994, avg=5777.63, stdev=13010.25 00:16:48.403 lat (usec): min=965, max=43020, avg=5804.22, stdev=13010.52 00:16:48.403 clat percentiles (usec): 00:16:48.403 | 1.00th=[ 1012], 5.00th=[ 1037], 
10.00th=[ 1074], 20.00th=[ 1106], 00:16:48.403 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1188], 00:16:48.403 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[41681], 95.00th=[42206], 00:16:48.403 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:48.403 | 99.99th=[43254] 00:16:48.403 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:16:48.403 slat (nsec): min=8740, max=54541, avg=28248.17, stdev=9695.46 00:16:48.403 clat (usec): min=303, max=961, avg=663.89, stdev=117.01 00:16:48.403 lat (usec): min=313, max=993, avg=692.14, stdev=121.47 00:16:48.403 clat percentiles (usec): 00:16:48.403 | 1.00th=[ 388], 5.00th=[ 429], 10.00th=[ 498], 20.00th=[ 570], 00:16:48.403 | 30.00th=[ 611], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 701], 00:16:48.403 | 70.00th=[ 734], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 832], 00:16:48.403 | 99.00th=[ 898], 99.50th=[ 906], 99.90th=[ 963], 99.95th=[ 963], 00:16:48.403 | 99.99th=[ 963] 00:16:48.403 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.403 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.403 lat (usec) : 500=8.29%, 750=53.59%, 1000=19.94% 00:16:48.403 lat (msec) : 2=16.11%, 50=2.07% 00:16:48.403 cpu : usr=1.17%, sys=2.24%, ctx=627, majf=0, minf=1 00:16:48.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.403 issued rwts: total=115,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.403 job2: (groupid=0, jobs=1): err= 0: pid=386284: Sat Apr 27 00:01:18 2024 00:16:48.403 read: IOPS=187, BW=748KiB/s (766kB/s)(776KiB/1037msec) 00:16:48.403 slat (nsec): min=24282, max=43166, avg=25199.46, stdev=2063.85 00:16:48.403 clat (usec): min=700, max=42556, avg=3444.89, stdev=9450.04 00:16:48.403 lat (usec): min=728, max=42581, avg=3470.09, stdev=9450.03 00:16:48.403 clat percentiles (usec): 00:16:48.403 | 1.00th=[ 914], 5.00th=[ 988], 10.00th=[ 1029], 20.00th=[ 1074], 00:16:48.403 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1172], 00:16:48.403 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[41681], 00:16:48.403 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:48.403 | 99.99th=[42730] 00:16:48.403 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:16:48.403 slat (nsec): min=9311, max=49070, avg=28061.89, stdev=8041.56 00:16:48.403 clat (usec): min=295, max=984, avg=670.67, stdev=117.14 00:16:48.403 lat (usec): min=307, max=1015, avg=698.73, stdev=120.45 00:16:48.403 clat percentiles (usec): 00:16:48.403 | 1.00th=[ 400], 5.00th=[ 457], 10.00th=[ 506], 20.00th=[ 570], 00:16:48.403 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 717], 00:16:48.403 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 816], 95.00th=[ 840], 00:16:48.403 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 988], 99.95th=[ 988], 00:16:48.403 | 99.99th=[ 988] 00:16:48.403 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.403 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.403 lat (usec) : 500=6.23%, 750=47.03%, 1000=20.96% 00:16:48.403 lat (msec) : 2=24.22%, 50=1.56% 00:16:48.403 cpu : usr=1.06%, sys=1.83%, ctx=706, majf=0, minf=1 00:16:48.403 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.403 issued rwts: total=194,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.403 job3: (groupid=0, jobs=1): err= 0: pid=386285: Sat Apr 27 00:01:18 2024 00:16:48.403 read: IOPS=520, BW=2082KiB/s (2132kB/s)(2084KiB/1001msec) 00:16:48.403 slat (nsec): min=6996, max=59569, avg=24071.50, stdev=6069.20 00:16:48.403 clat (usec): min=406, max=1402, avg=872.39, stdev=96.78 00:16:48.403 lat (usec): min=431, max=1427, avg=896.46, stdev=97.33 00:16:48.403 clat percentiles (usec): 00:16:48.403 | 1.00th=[ 627], 5.00th=[ 742], 10.00th=[ 766], 20.00th=[ 824], 00:16:48.403 | 30.00th=[ 848], 40.00th=[ 857], 50.00th=[ 873], 60.00th=[ 881], 00:16:48.403 | 70.00th=[ 898], 80.00th=[ 914], 90.00th=[ 938], 95.00th=[ 1057], 00:16:48.403 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1401], 99.95th=[ 1401], 00:16:48.403 | 99.99th=[ 1401] 00:16:48.403 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:48.403 slat (nsec): min=9267, max=64462, avg=25666.88, stdev=9838.01 00:16:48.403 clat (usec): min=170, max=931, avg=484.42, stdev=79.24 00:16:48.403 lat (usec): min=197, max=962, avg=510.08, stdev=82.41 00:16:48.403 clat percentiles (usec): 00:16:48.403 | 1.00th=[ 302], 5.00th=[ 363], 10.00th=[ 383], 20.00th=[ 412], 00:16:48.403 | 30.00th=[ 457], 40.00th=[ 482], 50.00th=[ 498], 60.00th=[ 510], 00:16:48.403 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 578], 00:16:48.403 | 99.00th=[ 766], 99.50th=[ 816], 99.90th=[ 898], 99.95th=[ 930], 00:16:48.403 | 99.99th=[ 930] 00:16:48.404 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.404 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.404 lat (usec) : 250=0.13%, 500=34.43%, 750=33.14%, 1000=30.16% 00:16:48.404 lat (msec) : 2=2.14% 00:16:48.404 cpu : usr=2.30%, sys=3.80%, ctx=1546, majf=0, minf=1 00:16:48.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.404 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.404 00:16:48.404 Run status group 0 (all jobs): 00:16:48.404 READ: bw=3263KiB/s (3342kB/s), 61.8KiB/s-2082KiB/s (63.3kB/s-2132kB/s), io=3384KiB (3465kB), run=1001-1037msec 00:16:48.404 WRITE: bw=9875KiB/s (10.1MB/s), 1975KiB/s-4092KiB/s (2022kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1037msec 00:16:48.404 00:16:48.404 Disk stats (read/write): 00:16:48.404 nvme0n1: ios=61/512, merge=0/0, ticks=582/272, in_queue=854, util=92.48% 00:16:48.404 nvme0n2: ios=60/512, merge=0/0, ticks=485/278, in_queue=763, util=87.46% 00:16:48.404 nvme0n3: ios=177/512, merge=0/0, ticks=444/329, in_queue=773, util=88.50% 00:16:48.404 nvme0n4: ios=546/722, merge=0/0, ticks=505/343, in_queue=848, util=94.56% 00:16:48.404 00:01:18 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:48.404 [global] 00:16:48.404 thread=1 00:16:48.404 invalidate=1 00:16:48.404 rw=write 00:16:48.404 
time_based=1 00:16:48.404 runtime=1 00:16:48.404 ioengine=libaio 00:16:48.404 direct=1 00:16:48.404 bs=4096 00:16:48.404 iodepth=128 00:16:48.404 norandommap=0 00:16:48.404 numjobs=1 00:16:48.404 00:16:48.404 verify_dump=1 00:16:48.404 verify_backlog=512 00:16:48.404 verify_state_save=0 00:16:48.404 do_verify=1 00:16:48.404 verify=crc32c-intel 00:16:48.404 [job0] 00:16:48.404 filename=/dev/nvme0n1 00:16:48.404 [job1] 00:16:48.404 filename=/dev/nvme0n2 00:16:48.404 [job2] 00:16:48.404 filename=/dev/nvme0n3 00:16:48.404 [job3] 00:16:48.404 filename=/dev/nvme0n4 00:16:48.404 Could not set queue depth (nvme0n1) 00:16:48.404 Could not set queue depth (nvme0n2) 00:16:48.404 Could not set queue depth (nvme0n3) 00:16:48.404 Could not set queue depth (nvme0n4) 00:16:48.668 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:48.668 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:48.668 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:48.668 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:48.668 fio-3.35 00:16:48.668 Starting 4 threads 00:16:50.081 00:16:50.081 job0: (groupid=0, jobs=1): err= 0: pid=386811: Sat Apr 27 00:01:19 2024 00:16:50.081 read: IOPS=9151, BW=35.7MiB/s (37.5MB/s)(36.0MiB/1007msec) 00:16:50.081 slat (nsec): min=960, max=7023.0k, avg=56723.59, stdev=424998.71 00:16:50.081 clat (usec): min=2685, max=14909, avg=7331.60, stdev=1682.93 00:16:50.081 lat (usec): min=2688, max=14912, avg=7388.32, stdev=1712.36 00:16:50.081 clat percentiles (usec): 00:16:50.081 | 1.00th=[ 3359], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6325], 00:16:50.081 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7177], 00:16:50.081 | 70.00th=[ 7439], 80.00th=[ 8356], 90.00th=[ 9896], 95.00th=[10814], 00:16:50.081 | 99.00th=[12518], 99.50th=[13435], 99.90th=[14484], 99.95th=[14877], 00:16:50.081 | 99.99th=[14877] 00:16:50.081 write: IOPS=9475, BW=37.0MiB/s (38.8MB/s)(37.3MiB/1007msec); 0 zone resets 00:16:50.081 slat (nsec): min=1638, max=6320.9k, avg=45159.38, stdev=264205.10 00:16:50.081 clat (usec): min=1670, max=14908, avg=6260.32, stdev=1342.52 00:16:50.081 lat (usec): min=1678, max=14910, avg=6305.48, stdev=1366.53 00:16:50.081 clat percentiles (usec): 00:16:50.081 | 1.00th=[ 2573], 5.00th=[ 3556], 10.00th=[ 4359], 20.00th=[ 5538], 00:16:50.081 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6587], 60.00th=[ 6718], 00:16:50.081 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7242], 95.00th=[ 7701], 00:16:50.081 | 99.00th=[10290], 99.50th=[11731], 99.90th=[12518], 99.95th=[12911], 00:16:50.081 | 99.99th=[14877] 00:16:50.081 bw ( KiB/s): min=37560, max=37760, per=38.71%, avg=37660.00, stdev=141.42, samples=2 00:16:50.081 iops : min= 9390, max= 9440, avg=9415.00, stdev=35.36, samples=2 00:16:50.081 lat (msec) : 2=0.08%, 4=4.38%, 10=90.20%, 20=5.35% 00:16:50.081 cpu : usr=6.86%, sys=7.75%, ctx=980, majf=0, minf=1 00:16:50.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.081 issued rwts: total=9216,9542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.081 job1: (groupid=0, jobs=1): 
err= 0: pid=386812: Sat Apr 27 00:01:19 2024 00:16:50.081 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:16:50.081 slat (nsec): min=873, max=24030k, avg=202318.88, stdev=1405832.16 00:16:50.081 clat (msec): min=2, max=112, avg=24.68, stdev=14.82 00:16:50.081 lat (msec): min=2, max=112, avg=24.88, stdev=14.95 00:16:50.081 clat percentiles (msec): 00:16:50.081 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 12], 20.00th=[ 16], 00:16:50.081 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 26], 00:16:50.081 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 40], 95.00th=[ 45], 00:16:50.081 | 99.00th=[ 97], 99.50th=[ 107], 99.90th=[ 112], 99.95th=[ 112], 00:16:50.081 | 99.99th=[ 112] 00:16:50.081 write: IOPS=2702, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1009msec); 0 zone resets 00:16:50.081 slat (nsec): min=1579, max=16296k, avg=170792.01, stdev=970603.73 00:16:50.081 clat (usec): min=1153, max=111980, avg=23775.69, stdev=18610.52 00:16:50.081 lat (usec): min=1161, max=111983, avg=23946.48, stdev=18688.19 00:16:50.081 clat percentiles (usec): 00:16:50.081 | 1.00th=[ 1745], 5.00th=[ 5473], 10.00th=[ 9110], 20.00th=[ 11338], 00:16:50.081 | 30.00th=[ 13173], 40.00th=[ 16188], 50.00th=[ 19268], 60.00th=[ 20579], 00:16:50.081 | 70.00th=[ 21890], 80.00th=[ 29492], 90.00th=[ 50070], 95.00th=[ 65274], 00:16:50.081 | 99.00th=[ 92799], 99.50th=[ 98042], 99.90th=[102237], 99.95th=[111674], 00:16:50.081 | 99.99th=[111674] 00:16:50.081 bw ( KiB/s): min= 8200, max=12592, per=10.69%, avg=10396.00, stdev=3105.61, samples=2 00:16:50.081 iops : min= 2050, max= 3148, avg=2599.00, stdev=776.40, samples=2 00:16:50.081 lat (msec) : 2=0.78%, 4=0.53%, 10=8.11%, 20=41.69%, 50=41.29% 00:16:50.081 lat (msec) : 100=7.04%, 250=0.57% 00:16:50.081 cpu : usr=1.88%, sys=3.08%, ctx=270, majf=0, minf=1 00:16:50.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.081 issued rwts: total=2560,2727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.081 job2: (groupid=0, jobs=1): err= 0: pid=386813: Sat Apr 27 00:01:19 2024 00:16:50.081 read: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec) 00:16:50.081 slat (nsec): min=933, max=7265.0k, avg=62605.56, stdev=446133.83 00:16:50.081 clat (usec): min=2321, max=15283, avg=8339.85, stdev=1916.12 00:16:50.081 lat (usec): min=2330, max=17231, avg=8402.46, stdev=1939.07 00:16:50.081 clat percentiles (usec): 00:16:50.082 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 6915], 00:16:50.082 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:16:50.082 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[11207], 95.00th=[12256], 00:16:50.082 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14746], 99.95th=[15270], 00:16:50.082 | 99.99th=[15270] 00:16:50.082 write: IOPS=8598, BW=33.6MiB/s (35.2MB/s)(33.7MiB/1004msec); 0 zone resets 00:16:50.082 slat (nsec): min=1681, max=5958.5k, avg=51054.82, stdev=296597.96 00:16:50.082 clat (usec): min=1207, max=15283, avg=6791.36, stdev=1680.79 00:16:50.082 lat (usec): min=1217, max=15285, avg=6842.41, stdev=1681.83 00:16:50.082 clat percentiles (usec): 00:16:50.082 | 1.00th=[ 2343], 5.00th=[ 3752], 10.00th=[ 4424], 20.00th=[ 5342], 00:16:50.082 | 30.00th=[ 6259], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7439], 00:16:50.082 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 
8455], 95.00th=[ 8717], 00:16:50.082 | 99.00th=[10945], 99.50th=[11076], 99.90th=[13566], 99.95th=[14484], 00:16:50.082 | 99.99th=[15270] 00:16:50.082 bw ( KiB/s): min=33584, max=34464, per=34.97%, avg=34024.00, stdev=622.25, samples=2 00:16:50.082 iops : min= 8396, max= 8616, avg=8506.00, stdev=155.56, samples=2 00:16:50.082 lat (msec) : 2=0.17%, 4=3.25%, 10=86.11%, 20=10.47% 00:16:50.082 cpu : usr=5.58%, sys=9.27%, ctx=716, majf=0, minf=1 00:16:50.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:50.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.082 issued rwts: total=8192,8633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.082 job3: (groupid=0, jobs=1): err= 0: pid=386814: Sat Apr 27 00:01:19 2024 00:16:50.082 read: IOPS=4392, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1049msec) 00:16:50.082 slat (nsec): min=930, max=15260k, avg=99577.62, stdev=874200.77 00:16:50.082 clat (usec): min=4252, max=57091, avg=16023.46, stdev=7789.52 00:16:50.082 lat (usec): min=4258, max=62724, avg=16123.03, stdev=7834.32 00:16:50.082 clat percentiles (usec): 00:16:50.082 | 1.00th=[ 5735], 5.00th=[10159], 10.00th=[10683], 20.00th=[11338], 00:16:50.082 | 30.00th=[11731], 40.00th=[11994], 50.00th=[13435], 60.00th=[15401], 00:16:50.082 | 70.00th=[17695], 80.00th=[19268], 90.00th=[21890], 95.00th=[25822], 00:16:50.082 | 99.00th=[52167], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:16:50.082 | 99.99th=[56886] 00:16:50.082 write: IOPS=4395, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1049msec); 0 zone resets 00:16:50.082 slat (nsec): min=1582, max=22041k, avg=90044.59, stdev=876469.73 00:16:50.082 clat (usec): min=772, max=51528, avg=12875.29, stdev=5912.84 00:16:50.082 lat (usec): min=804, max=51553, avg=12965.33, stdev=5997.49 00:16:50.082 clat percentiles (usec): 00:16:50.082 | 1.00th=[ 1582], 5.00th=[ 4146], 10.00th=[ 6194], 20.00th=[ 8717], 00:16:50.082 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11600], 60.00th=[13566], 00:16:50.082 | 70.00th=[15401], 80.00th=[17433], 90.00th=[20317], 95.00th=[22152], 00:16:50.082 | 99.00th=[35914], 99.50th=[37487], 99.90th=[43779], 99.95th=[45876], 00:16:50.082 | 99.99th=[51643] 00:16:50.082 bw ( KiB/s): min=16624, max=20240, per=18.95%, avg=18432.00, stdev=2556.90, samples=2 00:16:50.082 iops : min= 4156, max= 5060, avg=4608.00, stdev=639.22, samples=2 00:16:50.082 lat (usec) : 1000=0.05% 00:16:50.082 lat (msec) : 2=0.89%, 4=1.42%, 10=13.90%, 20=69.61%, 50=12.77% 00:16:50.082 lat (msec) : 100=1.37% 00:16:50.082 cpu : usr=3.34%, sys=4.96%, ctx=277, majf=0, minf=1 00:16:50.082 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:50.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.082 issued rwts: total=4608,4611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.082 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.082 00:16:50.082 Run status group 0 (all jobs): 00:16:50.082 READ: bw=91.5MiB/s (96.0MB/s), 9.91MiB/s-35.7MiB/s (10.4MB/s-37.5MB/s), io=96.0MiB (101MB), run=1004-1049msec 00:16:50.082 WRITE: bw=95.0MiB/s (99.6MB/s), 10.6MiB/s-37.0MiB/s (11.1MB/s-38.8MB/s), io=99.7MiB (105MB), run=1004-1049msec 00:16:50.082 00:16:50.082 Disk stats (read/write): 00:16:50.082 nvme0n1: ios=7714/7943, merge=0/0, 
ticks=53626/47143, in_queue=100769, util=99.10% 00:16:50.082 nvme0n2: ios=2088/2482, merge=0/0, ticks=49381/54241, in_queue=103622, util=97.04% 00:16:50.082 nvme0n3: ios=6801/7168, merge=0/0, ticks=54594/46160, in_queue=100754, util=99.79% 00:16:50.082 nvme0n4: ios=3584/3942, merge=0/0, ticks=53489/48276, in_queue=101765, util=89.55% 00:16:50.082 00:01:19 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:50.082 [global] 00:16:50.082 thread=1 00:16:50.082 invalidate=1 00:16:50.082 rw=randwrite 00:16:50.082 time_based=1 00:16:50.082 runtime=1 00:16:50.082 ioengine=libaio 00:16:50.082 direct=1 00:16:50.082 bs=4096 00:16:50.082 iodepth=128 00:16:50.082 norandommap=0 00:16:50.082 numjobs=1 00:16:50.082 00:16:50.082 verify_dump=1 00:16:50.082 verify_backlog=512 00:16:50.082 verify_state_save=0 00:16:50.082 do_verify=1 00:16:50.082 verify=crc32c-intel 00:16:50.082 [job0] 00:16:50.082 filename=/dev/nvme0n1 00:16:50.082 [job1] 00:16:50.082 filename=/dev/nvme0n2 00:16:50.082 [job2] 00:16:50.082 filename=/dev/nvme0n3 00:16:50.082 [job3] 00:16:50.082 filename=/dev/nvme0n4 00:16:50.082 Could not set queue depth (nvme0n1) 00:16:50.082 Could not set queue depth (nvme0n2) 00:16:50.082 Could not set queue depth (nvme0n3) 00:16:50.082 Could not set queue depth (nvme0n4) 00:16:50.344 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.344 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.344 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.344 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.344 fio-3.35 00:16:50.344 Starting 4 threads 00:16:51.754 00:16:51.754 job0: (groupid=0, jobs=1): err= 0: pid=387338: Sat Apr 27 00:01:21 2024 00:16:51.754 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:16:51.754 slat (nsec): min=832, max=23225k, avg=96256.31, stdev=723080.72 00:16:51.754 clat (usec): min=4657, max=68925, avg=11645.73, stdev=8654.88 00:16:51.754 lat (usec): min=4661, max=68932, avg=11741.99, stdev=8715.31 00:16:51.754 clat percentiles (usec): 00:16:51.754 | 1.00th=[ 5407], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7111], 00:16:51.754 | 30.00th=[ 7635], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10028], 00:16:51.754 | 70.00th=[10814], 80.00th=[12518], 90.00th=[17957], 95.00th=[25560], 00:16:51.754 | 99.00th=[58459], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:16:51.754 | 99.99th=[68682] 00:16:51.754 write: IOPS=5751, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1004msec); 0 zone resets 00:16:51.754 slat (nsec): min=1444, max=17111k, avg=75391.29, stdev=500074.06 00:16:51.754 clat (usec): min=3023, max=76601, avg=10643.08, stdev=6959.28 00:16:51.754 lat (usec): min=3360, max=76604, avg=10718.47, stdev=6988.01 00:16:51.754 clat percentiles (usec): 00:16:51.754 | 1.00th=[ 4621], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 7111], 00:16:51.754 | 30.00th=[ 7308], 40.00th=[ 8356], 50.00th=[ 9110], 60.00th=[ 9634], 00:16:51.754 | 70.00th=[10028], 80.00th=[11207], 90.00th=[17171], 95.00th=[22414], 00:16:51.754 | 99.00th=[51119], 99.50th=[56361], 99.90th=[64226], 99.95th=[77071], 00:16:51.754 | 99.99th=[77071] 00:16:51.754 bw ( KiB/s): min=20480, max=24704, per=22.85%, avg=22592.00, stdev=2986.82, samples=2 00:16:51.754 iops : min= 5120, 
max= 6176, avg=5648.00, stdev=746.70, samples=2 00:16:51.754 lat (msec) : 4=0.15%, 10=65.40%, 20=28.17%, 50=5.12%, 100=1.17% 00:16:51.754 cpu : usr=2.79%, sys=4.89%, ctx=546, majf=0, minf=1 00:16:51.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:51.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.754 issued rwts: total=5632,5775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.754 job1: (groupid=0, jobs=1): err= 0: pid=387339: Sat Apr 27 00:01:21 2024 00:16:51.754 read: IOPS=6307, BW=24.6MiB/s (25.8MB/s)(24.8MiB/1007msec) 00:16:51.754 slat (nsec): min=867, max=10255k, avg=81218.46, stdev=558664.48 00:16:51.754 clat (usec): min=2468, max=20617, avg=10259.83, stdev=2624.54 00:16:51.754 lat (usec): min=2561, max=20621, avg=10341.05, stdev=2648.86 00:16:51.754 clat percentiles (usec): 00:16:51.754 | 1.00th=[ 4228], 5.00th=[ 6849], 10.00th=[ 7767], 20.00th=[ 8455], 00:16:51.754 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:16:51.754 | 70.00th=[10683], 80.00th=[12256], 90.00th=[13960], 95.00th=[15795], 00:16:51.754 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19530], 99.95th=[20579], 00:16:51.754 | 99.99th=[20579] 00:16:51.754 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:16:51.754 slat (nsec): min=1452, max=6879.6k, avg=68288.80, stdev=327231.14 00:16:51.754 clat (usec): min=2623, max=19351, avg=9359.05, stdev=2760.27 00:16:51.754 lat (usec): min=2630, max=19353, avg=9427.34, stdev=2770.53 00:16:51.754 clat percentiles (usec): 00:16:51.754 | 1.00th=[ 3556], 5.00th=[ 5211], 10.00th=[ 6390], 20.00th=[ 7635], 00:16:51.754 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:16:51.754 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[13173], 95.00th=[15401], 00:16:51.754 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:16:51.754 | 99.99th=[19268] 00:16:51.754 bw ( KiB/s): min=25552, max=27696, per=26.93%, avg=26624.00, stdev=1516.04, samples=2 00:16:51.754 iops : min= 6388, max= 6924, avg=6656.00, stdev=379.01, samples=2 00:16:51.754 lat (msec) : 4=1.17%, 10=62.29%, 20=36.49%, 50=0.05% 00:16:51.754 cpu : usr=4.17%, sys=5.27%, ctx=763, majf=0, minf=1 00:16:51.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:51.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.754 issued rwts: total=6352,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.754 job2: (groupid=0, jobs=1): err= 0: pid=387340: Sat Apr 27 00:01:21 2024 00:16:51.754 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:16:51.754 slat (nsec): min=898, max=9973.5k, avg=87758.34, stdev=656231.75 00:16:51.754 clat (usec): min=5206, max=34847, avg=12401.59, stdev=4393.74 00:16:51.754 lat (usec): min=5211, max=40002, avg=12489.34, stdev=4450.79 00:16:51.754 clat percentiles (usec): 00:16:51.754 | 1.00th=[ 6259], 5.00th=[ 7570], 10.00th=[ 7963], 20.00th=[ 9241], 00:16:51.754 | 30.00th=[ 9765], 40.00th=[11338], 50.00th=[11994], 60.00th=[12256], 00:16:51.754 | 70.00th=[12780], 80.00th=[14222], 90.00th=[19268], 95.00th=[21890], 00:16:51.754 | 99.00th=[28181], 99.50th=[33162], 99.90th=[34866], 
99.95th=[34866], 00:16:51.754 | 99.99th=[34866] 00:16:51.754 write: IOPS=5269, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1004msec); 0 zone resets 00:16:51.754 slat (nsec): min=1546, max=10137k, avg=90391.85, stdev=553279.38 00:16:51.754 clat (usec): min=2556, max=45005, avg=12039.89, stdev=7206.71 00:16:51.754 lat (usec): min=2578, max=45013, avg=12130.29, stdev=7263.36 00:16:51.754 clat percentiles (usec): 00:16:51.754 | 1.00th=[ 4490], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 6718], 00:16:51.754 | 30.00th=[ 8160], 40.00th=[10159], 50.00th=[11338], 60.00th=[11731], 00:16:51.754 | 70.00th=[12387], 80.00th=[14615], 90.00th=[18482], 95.00th=[25560], 00:16:51.754 | 99.00th=[43254], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:16:51.754 | 99.99th=[44827] 00:16:51.754 bw ( KiB/s): min=18640, max=22832, per=20.97%, avg=20736.00, stdev=2964.19, samples=2 00:16:51.755 iops : min= 4660, max= 5708, avg=5184.00, stdev=741.05, samples=2 00:16:51.755 lat (msec) : 4=0.16%, 10=35.76%, 20=55.67%, 50=8.40% 00:16:51.755 cpu : usr=4.09%, sys=4.99%, ctx=393, majf=0, minf=2 00:16:51.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:51.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.755 issued rwts: total=5120,5291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.755 job3: (groupid=0, jobs=1): err= 0: pid=387341: Sat Apr 27 00:01:21 2024 00:16:51.755 read: IOPS=6873, BW=26.8MiB/s (28.2MB/s)(27.0MiB/1005msec) 00:16:51.755 slat (nsec): min=881, max=13033k, avg=78734.68, stdev=590328.98 00:16:51.755 clat (usec): min=2848, max=35281, avg=9829.36, stdev=4611.14 00:16:51.755 lat (usec): min=2853, max=35310, avg=9908.09, stdev=4654.38 00:16:51.755 clat percentiles (usec): 00:16:51.755 | 1.00th=[ 4621], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7308], 00:16:51.755 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 9110], 00:16:51.755 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[13566], 95.00th=[20841], 00:16:51.755 | 99.00th=[29754], 99.50th=[30016], 99.90th=[30540], 99.95th=[31851], 00:16:51.755 | 99.99th=[35390] 00:16:51.755 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:16:51.755 slat (nsec): min=1487, max=11456k, avg=59244.26, stdev=320073.27 00:16:51.755 clat (usec): min=652, max=30443, avg=8313.93, stdev=3317.58 00:16:51.755 lat (usec): min=661, max=31430, avg=8373.18, stdev=3329.54 00:16:51.755 clat percentiles (usec): 00:16:51.755 | 1.00th=[ 2442], 5.00th=[ 4080], 10.00th=[ 4752], 20.00th=[ 6456], 00:16:51.755 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8094], 00:16:51.755 | 70.00th=[ 8356], 80.00th=[ 9241], 90.00th=[12649], 95.00th=[14746], 00:16:51.755 | 99.00th=[21365], 99.50th=[23725], 99.90th=[26870], 99.95th=[27919], 00:16:51.755 | 99.99th=[30540] 00:16:51.755 bw ( KiB/s): min=24576, max=32768, per=29.00%, avg=28672.00, stdev=5792.62, samples=2 00:16:51.755 iops : min= 6144, max= 8192, avg=7168.00, stdev=1448.15, samples=2 00:16:51.755 lat (usec) : 750=0.02% 00:16:51.755 lat (msec) : 2=0.33%, 4=2.47%, 10=75.04%, 20=18.64%, 50=3.50% 00:16:51.755 cpu : usr=4.38%, sys=5.88%, ctx=818, majf=0, minf=1 00:16:51.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:51.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.755 issued rwts: total=6908,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.755 00:16:51.755 Run status group 0 (all jobs): 00:16:51.755 READ: bw=93.1MiB/s (97.7MB/s), 19.9MiB/s-26.8MiB/s (20.9MB/s-28.2MB/s), io=93.8MiB (98.4MB), run=1004-1007msec 00:16:51.755 WRITE: bw=96.5MiB/s (101MB/s), 20.6MiB/s-27.9MiB/s (21.6MB/s-29.2MB/s), io=97.2MiB (102MB), run=1004-1007msec 00:16:51.755 00:16:51.755 Disk stats (read/write): 00:16:51.755 nvme0n1: ios=4177/4608, merge=0/0, ticks=21087/18462, in_queue=39549, util=93.49% 00:16:51.755 nvme0n2: ios=5160/5631, merge=0/0, ticks=41343/41140, in_queue=82483, util=93.79% 00:16:51.755 nvme0n3: ios=4113/4512, merge=0/0, ticks=30826/33130, in_queue=63956, util=92.16% 00:16:51.755 nvme0n4: ios=6360/6656, merge=0/0, ticks=48628/43572, in_queue=92200, util=91.33% 00:16:51.755 00:01:21 -- target/fio.sh@55 -- # sync 00:16:51.755 00:01:21 -- target/fio.sh@59 -- # fio_pid=387597 00:16:51.755 00:01:21 -- target/fio.sh@61 -- # sleep 3 00:16:51.755 00:01:21 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:51.755 [global] 00:16:51.755 thread=1 00:16:51.755 invalidate=1 00:16:51.755 rw=read 00:16:51.755 time_based=1 00:16:51.755 runtime=10 00:16:51.755 ioengine=libaio 00:16:51.755 direct=1 00:16:51.755 bs=4096 00:16:51.755 iodepth=1 00:16:51.755 norandommap=1 00:16:51.755 numjobs=1 00:16:51.755 00:16:51.755 [job0] 00:16:51.755 filename=/dev/nvme0n1 00:16:51.755 [job1] 00:16:51.755 filename=/dev/nvme0n2 00:16:51.755 [job2] 00:16:51.755 filename=/dev/nvme0n3 00:16:51.755 [job3] 00:16:51.755 filename=/dev/nvme0n4 00:16:51.755 Could not set queue depth (nvme0n1) 00:16:51.755 Could not set queue depth (nvme0n2) 00:16:51.755 Could not set queue depth (nvme0n3) 00:16:51.755 Could not set queue depth (nvme0n4) 00:16:52.023 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.023 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.023 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.023 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.023 fio-3.35 00:16:52.023 Starting 4 threads 00:16:54.565 00:01:24 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:54.826 00:01:24 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:54.826 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2277376, buflen=4096 00:16:54.826 fio: pid=387859, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:54.826 00:01:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:54.826 00:01:24 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:54.826 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9588736, buflen=4096 00:16:54.826 fio: pid=387858, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:55.087 00:01:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.087 00:01:25 -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:55.087 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=10051584, buflen=4096 00:16:55.087 fio: pid=387856, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:55.348 00:01:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.348 00:01:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:55.348 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=3854336, buflen=4096 00:16:55.348 fio: pid=387857, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:55.348 00:16:55.348 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=387856: Sat Apr 27 00:01:25 2024 00:16:55.348 read: IOPS=833, BW=3333KiB/s (3413kB/s)(9816KiB/2945msec) 00:16:55.348 slat (usec): min=7, max=25971, avg=47.89, stdev=713.22 00:16:55.348 clat (usec): min=428, max=8832, avg=1137.15, stdev=203.77 00:16:55.348 lat (usec): min=437, max=27188, avg=1177.75, stdev=648.28 00:16:55.348 clat percentiles (usec): 00:16:55.348 | 1.00th=[ 791], 5.00th=[ 955], 10.00th=[ 1020], 20.00th=[ 1074], 00:16:55.348 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1172], 00:16:55.348 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1221], 95.00th=[ 1254], 00:16:55.348 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1418], 99.95th=[ 5669], 00:16:55.348 | 99.99th=[ 8848] 00:16:55.348 bw ( KiB/s): min= 3376, max= 3448, per=42.19%, avg=3408.00, stdev=25.92, samples=5 00:16:55.348 iops : min= 844, max= 862, avg=852.00, stdev= 6.48, samples=5 00:16:55.348 lat (usec) : 500=0.08%, 750=0.45%, 1000=7.54% 00:16:55.348 lat (msec) : 2=91.81%, 10=0.08% 00:16:55.348 cpu : usr=0.78%, sys=2.38%, ctx=2459, majf=0, minf=1 00:16:55.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.348 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.348 issued rwts: total=2455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.348 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=387857: Sat Apr 27 00:01:25 2024 00:16:55.348 read: IOPS=302, BW=1208KiB/s (1237kB/s)(3764KiB/3116msec) 00:16:55.348 slat (usec): min=6, max=14837, avg=57.40, stdev=673.78 00:16:55.349 clat (usec): min=273, max=43014, avg=3224.59, stdev=9029.03 00:16:55.349 lat (usec): min=298, max=43040, avg=3282.00, stdev=9047.12 00:16:55.349 clat percentiles (usec): 00:16:55.349 | 1.00th=[ 603], 5.00th=[ 914], 10.00th=[ 996], 20.00th=[ 1057], 00:16:55.349 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:16:55.349 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[41157], 00:16:55.349 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:16:55.349 | 99.99th=[43254] 00:16:55.349 bw ( KiB/s): min= 952, max= 1416, per=15.03%, avg=1214.33, stdev=177.91, samples=6 00:16:55.349 iops : min= 238, max= 354, avg=303.50, stdev=44.39, samples=6 00:16:55.349 lat (usec) : 500=0.42%, 750=1.59%, 1000=8.92% 00:16:55.349 lat (msec) : 2=83.76%, 50=5.20% 00:16:55.349 cpu : usr=0.48%, sys=1.19%, ctx=946, majf=0, minf=1 00:16:55.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.349 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.349 issued rwts: total=942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.349 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=387858: Sat Apr 27 00:01:25 2024 00:16:55.349 read: IOPS=843, BW=3371KiB/s (3452kB/s)(9364KiB/2778msec) 00:16:55.349 slat (nsec): min=7096, max=64681, avg=26354.66, stdev=3505.42 00:16:55.349 clat (usec): min=666, max=41966, avg=1144.80, stdev=1310.01 00:16:55.349 lat (usec): min=693, max=41993, avg=1171.16, stdev=1309.97 00:16:55.349 clat percentiles (usec): 00:16:55.349 | 1.00th=[ 824], 5.00th=[ 930], 10.00th=[ 996], 20.00th=[ 1045], 00:16:55.349 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:16:55.349 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:16:55.349 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[27919], 99.95th=[41157], 00:16:55.349 | 99.99th=[42206] 00:16:55.349 bw ( KiB/s): min= 3488, max= 3600, per=43.85%, avg=3542.40, stdev=46.10, samples=5 00:16:55.349 iops : min= 872, max= 900, avg=885.60, stdev=11.52, samples=5 00:16:55.349 lat (usec) : 750=0.34%, 1000=10.59% 00:16:55.349 lat (msec) : 2=88.86%, 10=0.04%, 50=0.13% 00:16:55.349 cpu : usr=1.40%, sys=3.46%, ctx=2342, majf=0, minf=1 00:16:55.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.349 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.349 issued rwts: total=2342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.349 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=387859: Sat Apr 27 00:01:25 2024 00:16:55.349 read: IOPS=213, BW=854KiB/s (875kB/s)(2224KiB/2603msec) 00:16:55.349 slat (nsec): min=6623, max=61385, avg=24894.98, stdev=5371.22 00:16:55.349 clat (usec): min=338, max=44957, avg=4613.53, stdev=11419.46 00:16:55.349 lat (usec): min=345, max=44987, avg=4638.41, stdev=11420.50 00:16:55.349 clat percentiles (usec): 00:16:55.349 | 1.00th=[ 537], 5.00th=[ 775], 10.00th=[ 889], 20.00th=[ 988], 00:16:55.349 | 30.00th=[ 1074], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:16:55.349 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1319], 95.00th=[41681], 00:16:55.349 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:16:55.349 | 99.99th=[44827] 00:16:55.349 bw ( KiB/s): min= 96, max= 1400, per=10.97%, avg=886.40, stdev=539.70, samples=5 00:16:55.349 iops : min= 24, max= 350, avg=221.60, stdev=134.93, samples=5 00:16:55.349 lat (usec) : 500=0.72%, 750=3.77%, 1000=17.41% 00:16:55.349 lat (msec) : 2=69.12%, 10=0.18%, 50=8.62% 00:16:55.349 cpu : usr=0.35%, sys=0.73%, ctx=558, majf=0, minf=2 00:16:55.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.349 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.349 issued rwts: total=557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.349 00:16:55.349 Run status group 0 
(all jobs): 00:16:55.349 READ: bw=8077KiB/s (8271kB/s), 854KiB/s-3371KiB/s (875kB/s-3452kB/s), io=24.6MiB (25.8MB), run=2603-3116msec 00:16:55.349 00:16:55.349 Disk stats (read/write): 00:16:55.349 nvme0n1: ios=2398/0, merge=0/0, ticks=2627/0, in_queue=2627, util=93.46% 00:16:55.349 nvme0n2: ios=940/0, merge=0/0, ticks=2904/0, in_queue=2904, util=94.83% 00:16:55.349 nvme0n3: ios=2286/0, merge=0/0, ticks=2258/0, in_queue=2258, util=96.00% 00:16:55.349 nvme0n4: ios=556/0, merge=0/0, ticks=2505/0, in_queue=2505, util=96.42% 00:16:55.349 00:01:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.349 00:01:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:55.609 00:01:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.609 00:01:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:55.869 00:01:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.869 00:01:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:55.869 00:01:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:55.869 00:01:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:56.130 00:01:26 -- target/fio.sh@69 -- # fio_status=0 00:16:56.130 00:01:26 -- target/fio.sh@70 -- # wait 387597 00:16:56.130 00:01:26 -- target/fio.sh@70 -- # fio_status=4 00:16:56.130 00:01:26 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.130 00:01:26 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.130 00:01:26 -- common/autotest_common.sh@1205 -- # local i=0 00:16:56.130 00:01:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:56.130 00:01:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.130 00:01:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:56.130 00:01:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.130 00:01:26 -- common/autotest_common.sh@1217 -- # return 0 00:16:56.130 00:01:26 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:56.130 00:01:26 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:56.130 nvmf hotplug test: fio failed as expected 00:16:56.130 00:01:26 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.390 00:01:26 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:56.390 00:01:26 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:56.390 00:01:26 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:56.390 00:01:26 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:56.390 00:01:26 -- target/fio.sh@91 -- # nvmftestfini 00:16:56.390 00:01:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:56.390 00:01:26 -- nvmf/common.sh@117 -- # sync 00:16:56.390 00:01:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.390 00:01:26 -- nvmf/common.sh@120 -- # set +e 00:16:56.390 00:01:26 -- nvmf/common.sh@121 -- # for i in {1..20} 
00:16:56.390 00:01:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.390 rmmod nvme_tcp 00:16:56.390 rmmod nvme_fabrics 00:16:56.390 rmmod nvme_keyring 00:16:56.390 00:01:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.390 00:01:26 -- nvmf/common.sh@124 -- # set -e 00:16:56.390 00:01:26 -- nvmf/common.sh@125 -- # return 0 00:16:56.390 00:01:26 -- nvmf/common.sh@478 -- # '[' -n 384095 ']' 00:16:56.390 00:01:26 -- nvmf/common.sh@479 -- # killprocess 384095 00:16:56.390 00:01:26 -- common/autotest_common.sh@936 -- # '[' -z 384095 ']' 00:16:56.390 00:01:26 -- common/autotest_common.sh@940 -- # kill -0 384095 00:16:56.390 00:01:26 -- common/autotest_common.sh@941 -- # uname 00:16:56.390 00:01:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.390 00:01:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 384095 00:16:56.390 00:01:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:56.390 00:01:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:56.390 00:01:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 384095' 00:16:56.390 killing process with pid 384095 00:16:56.390 00:01:26 -- common/autotest_common.sh@955 -- # kill 384095 00:16:56.390 00:01:26 -- common/autotest_common.sh@960 -- # wait 384095 00:16:56.650 00:01:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:56.650 00:01:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:56.650 00:01:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:56.650 00:01:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.650 00:01:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.650 00:01:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.650 00:01:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.650 00:01:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.190 00:01:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:59.190 00:16:59.190 real 0m27.791s 00:16:59.190 user 2m32.038s 00:16:59.190 sys 0m8.724s 00:16:59.190 00:01:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:59.190 00:01:28 -- common/autotest_common.sh@10 -- # set +x 00:16:59.190 ************************************ 00:16:59.190 END TEST nvmf_fio_target 00:16:59.190 ************************************ 00:16:59.190 00:01:28 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:59.190 00:01:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:59.190 00:01:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:59.190 00:01:28 -- common/autotest_common.sh@10 -- # set +x 00:16:59.190 ************************************ 00:16:59.190 START TEST nvmf_bdevio 00:16:59.190 ************************************ 00:16:59.190 00:01:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:59.190 * Looking for test storage... 
00:16:59.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.190 00:01:29 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.190 00:01:29 -- nvmf/common.sh@7 -- # uname -s 00:16:59.190 00:01:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.190 00:01:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.190 00:01:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.190 00:01:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.190 00:01:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.190 00:01:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.190 00:01:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.190 00:01:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.190 00:01:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.190 00:01:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.190 00:01:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.191 00:01:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.191 00:01:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.191 00:01:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.191 00:01:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.191 00:01:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.191 00:01:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.191 00:01:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.191 00:01:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.191 00:01:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.191 00:01:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.191 00:01:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.191 00:01:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.191 00:01:29 -- paths/export.sh@5 -- # export PATH 00:16:59.191 00:01:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.191 00:01:29 -- nvmf/common.sh@47 -- # : 0 00:16:59.191 00:01:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.191 00:01:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.191 00:01:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.191 00:01:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.191 00:01:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.191 00:01:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.191 00:01:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.191 00:01:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.191 00:01:29 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.191 00:01:29 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.191 00:01:29 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:59.191 00:01:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:59.191 00:01:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.191 00:01:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:59.191 00:01:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:59.191 00:01:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:59.191 00:01:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.191 00:01:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.191 00:01:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.191 00:01:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:59.191 00:01:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:59.191 00:01:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:59.191 00:01:29 -- common/autotest_common.sh@10 -- # set +x 00:17:05.778 00:01:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:05.778 00:01:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.778 00:01:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.778 00:01:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.779 00:01:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.779 00:01:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.779 00:01:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.779 00:01:35 -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.779 00:01:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.779 00:01:35 -- nvmf/common.sh@296 
-- # e810=() 00:17:05.779 00:01:35 -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.779 00:01:35 -- nvmf/common.sh@297 -- # x722=() 00:17:05.779 00:01:35 -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.779 00:01:35 -- nvmf/common.sh@298 -- # mlx=() 00:17:05.779 00:01:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.779 00:01:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.779 00:01:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.779 00:01:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.779 00:01:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.779 00:01:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.779 00:01:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:05.779 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:05.779 00:01:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.779 00:01:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:05.779 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:05.779 00:01:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.779 00:01:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.779 00:01:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.779 00:01:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:05.779 00:01:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.779 00:01:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:05.779 Found 
net devices under 0000:31:00.0: cvl_0_0 00:17:05.779 00:01:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.779 00:01:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.779 00:01:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.779 00:01:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:05.779 00:01:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.779 00:01:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:05.779 Found net devices under 0000:31:00.1: cvl_0_1 00:17:05.779 00:01:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.779 00:01:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:05.779 00:01:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:05.779 00:01:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:05.779 00:01:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:05.779 00:01:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.779 00:01:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.779 00:01:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.779 00:01:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.779 00:01:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.779 00:01:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.779 00:01:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.779 00:01:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.779 00:01:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.779 00:01:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.779 00:01:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.779 00:01:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.779 00:01:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.040 00:01:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.040 00:01:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.040 00:01:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:06.040 00:01:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.040 00:01:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.040 00:01:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.040 00:01:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:06.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:17:06.040 00:17:06.040 --- 10.0.0.2 ping statistics --- 00:17:06.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.040 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:17:06.040 00:01:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:17:06.040 00:17:06.040 --- 10.0.0.1 ping statistics --- 00:17:06.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.040 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:17:06.040 00:01:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.040 00:01:36 -- nvmf/common.sh@411 -- # return 0 00:17:06.040 00:01:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:06.040 00:01:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.040 00:01:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:06.040 00:01:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:06.040 00:01:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.040 00:01:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:06.040 00:01:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:06.040 00:01:36 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:06.040 00:01:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:06.040 00:01:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:06.040 00:01:36 -- common/autotest_common.sh@10 -- # set +x 00:17:06.302 00:01:36 -- nvmf/common.sh@470 -- # nvmfpid=392954 00:17:06.302 00:01:36 -- nvmf/common.sh@471 -- # waitforlisten 392954 00:17:06.302 00:01:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:06.302 00:01:36 -- common/autotest_common.sh@817 -- # '[' -z 392954 ']' 00:17:06.302 00:01:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.302 00:01:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:06.302 00:01:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.302 00:01:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:06.302 00:01:36 -- common/autotest_common.sh@10 -- # set +x 00:17:06.302 [2024-04-27 00:01:36.323667] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:17:06.302 [2024-04-27 00:01:36.323715] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.302 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.302 [2024-04-27 00:01:36.406652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.302 [2024-04-27 00:01:36.475329] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.302 [2024-04-27 00:01:36.475374] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.302 [2024-04-27 00:01:36.475382] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.302 [2024-04-27 00:01:36.475389] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.302 [2024-04-27 00:01:36.475395] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:06.302 [2024-04-27 00:01:36.475543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:06.302 [2024-04-27 00:01:36.475692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:06.302 [2024-04-27 00:01:36.475866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:06.302 [2024-04-27 00:01:36.475869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.246 00:01:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:07.246 00:01:37 -- common/autotest_common.sh@850 -- # return 0 00:17:07.246 00:01:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:07.246 00:01:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:07.246 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.246 00:01:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.246 00:01:37 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.246 00:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.246 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.246 [2024-04-27 00:01:37.146122] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.246 00:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.246 00:01:37 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.246 00:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.246 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.246 Malloc0 00:17:07.246 00:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.246 00:01:37 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.246 00:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.246 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.246 00:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.246 00:01:37 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.246 00:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.246 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.246 00:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.246 00:01:37 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.246 00:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.246 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.246 [2024-04-27 00:01:37.211292] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.246 00:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.246 00:01:37 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:07.246 00:01:37 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:07.246 00:01:37 -- nvmf/common.sh@521 -- # config=() 00:17:07.246 00:01:37 -- nvmf/common.sh@521 -- # local subsystem config 00:17:07.246 00:01:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.246 00:01:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.246 { 00:17:07.246 "params": { 00:17:07.246 "name": "Nvme$subsystem", 00:17:07.246 "trtype": "$TEST_TRANSPORT", 00:17:07.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.246 "adrfam": "ipv4", 00:17:07.246 "trsvcid": 
"$NVMF_PORT", 00:17:07.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.246 "hdgst": ${hdgst:-false}, 00:17:07.246 "ddgst": ${ddgst:-false} 00:17:07.246 }, 00:17:07.246 "method": "bdev_nvme_attach_controller" 00:17:07.246 } 00:17:07.246 EOF 00:17:07.246 )") 00:17:07.246 00:01:37 -- nvmf/common.sh@543 -- # cat 00:17:07.246 00:01:37 -- nvmf/common.sh@545 -- # jq . 00:17:07.246 00:01:37 -- nvmf/common.sh@546 -- # IFS=, 00:17:07.246 00:01:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:07.246 "params": { 00:17:07.246 "name": "Nvme1", 00:17:07.246 "trtype": "tcp", 00:17:07.246 "traddr": "10.0.0.2", 00:17:07.246 "adrfam": "ipv4", 00:17:07.246 "trsvcid": "4420", 00:17:07.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.246 "hdgst": false, 00:17:07.246 "ddgst": false 00:17:07.246 }, 00:17:07.246 "method": "bdev_nvme_attach_controller" 00:17:07.246 }' 00:17:07.246 [2024-04-27 00:01:37.267424] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:17:07.246 [2024-04-27 00:01:37.267495] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393189 ] 00:17:07.246 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.246 [2024-04-27 00:01:37.333117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.246 [2024-04-27 00:01:37.409794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.246 [2024-04-27 00:01:37.409933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.246 [2024-04-27 00:01:37.410098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.507 I/O targets: 00:17:07.507 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:07.507 00:17:07.507 00:17:07.507 CUnit - A unit testing framework for C - Version 2.1-3 00:17:07.507 http://cunit.sourceforge.net/ 00:17:07.507 00:17:07.507 00:17:07.507 Suite: bdevio tests on: Nvme1n1 00:17:07.507 Test: blockdev write read block ...passed 00:17:07.507 Test: blockdev write zeroes read block ...passed 00:17:07.507 Test: blockdev write zeroes read no split ...passed 00:17:07.507 Test: blockdev write zeroes read split ...passed 00:17:07.507 Test: blockdev write zeroes read split partial ...passed 00:17:07.507 Test: blockdev reset ...[2024-04-27 00:01:37.707198] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:07.507 [2024-04-27 00:01:37.707269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197f900 (9): Bad file descriptor 00:17:07.507 [2024-04-27 00:01:37.721106] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:07.507 passed 00:17:07.507 Test: blockdev write read 8 blocks ...passed 00:17:07.507 Test: blockdev write read size > 128k ...passed 00:17:07.507 Test: blockdev write read invalid size ...passed 00:17:07.769 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:07.769 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:07.769 Test: blockdev write read max offset ...passed 00:17:07.769 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:07.769 Test: blockdev writev readv 8 blocks ...passed 00:17:07.769 Test: blockdev writev readv 30 x 1block ...passed 00:17:07.769 Test: blockdev writev readv block ...passed 00:17:07.769 Test: blockdev writev readv size > 128k ...passed 00:17:07.769 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:07.769 Test: blockdev comparev and writev ...[2024-04-27 00:01:37.945373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.769 [2024-04-27 00:01:37.945397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.769 [2024-04-27 00:01:37.945408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.769 [2024-04-27 00:01:37.945414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.769 [2024-04-27 00:01:37.945915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.769 [2024-04-27 00:01:37.945924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:07.769 [2024-04-27 00:01:37.945934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.769 [2024-04-27 00:01:37.945940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:07.769 [2024-04-27 00:01:37.946450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.769 [2024-04-27 00:01:37.946457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:07.769 [2024-04-27 00:01:37.946467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.769 [2024-04-27 00:01:37.946472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:07.769 [2024-04-27 00:01:37.947007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.769 [2024-04-27 00:01:37.947014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:07.769 [2024-04-27 00:01:37.947023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.769 [2024-04-27 00:01:37.947028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:08.030 passed 00:17:08.030 Test: blockdev nvme passthru rw ...passed 00:17:08.030 Test: blockdev nvme passthru vendor specific ...[2024-04-27 00:01:38.031595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.030 [2024-04-27 00:01:38.031604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:08.030 [2024-04-27 00:01:38.032028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.030 [2024-04-27 00:01:38.032036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:08.030 [2024-04-27 00:01:38.032418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.030 [2024-04-27 00:01:38.032425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:08.030 [2024-04-27 00:01:38.032789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.030 [2024-04-27 00:01:38.032796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:08.030 passed 00:17:08.030 Test: blockdev nvme admin passthru ...passed 00:17:08.030 Test: blockdev copy ...passed 00:17:08.030 00:17:08.030 Run Summary: Type Total Ran Passed Failed Inactive 00:17:08.030 suites 1 1 n/a 0 0 00:17:08.030 tests 23 23 23 0 0 00:17:08.030 asserts 152 152 152 0 n/a 00:17:08.030 00:17:08.030 Elapsed time = 1.081 seconds 00:17:08.030 00:01:38 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.030 00:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.030 00:01:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.030 00:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.030 00:01:38 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:08.030 00:01:38 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:08.030 00:01:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:08.030 00:01:38 -- nvmf/common.sh@117 -- # sync 00:17:08.030 00:01:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.030 00:01:38 -- nvmf/common.sh@120 -- # set +e 00:17:08.030 00:01:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.030 00:01:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.030 rmmod nvme_tcp 00:17:08.291 rmmod nvme_fabrics 00:17:08.291 rmmod nvme_keyring 00:17:08.291 00:01:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.291 00:01:38 -- nvmf/common.sh@124 -- # set -e 00:17:08.291 00:01:38 -- nvmf/common.sh@125 -- # return 0 00:17:08.291 00:01:38 -- nvmf/common.sh@478 -- # '[' -n 392954 ']' 00:17:08.291 00:01:38 -- nvmf/common.sh@479 -- # killprocess 392954 00:17:08.291 00:01:38 -- common/autotest_common.sh@936 -- # '[' -z 392954 ']' 00:17:08.291 00:01:38 -- common/autotest_common.sh@940 -- # kill -0 392954 00:17:08.291 00:01:38 -- common/autotest_common.sh@941 -- # uname 00:17:08.291 00:01:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.291 00:01:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 392954 00:17:08.291 00:01:38 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:08.291 00:01:38 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:08.291 00:01:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 392954' 00:17:08.291 killing process with pid 392954 00:17:08.291 00:01:38 -- common/autotest_common.sh@955 -- # kill 392954 00:17:08.291 00:01:38 -- common/autotest_common.sh@960 -- # wait 392954 00:17:08.552 00:01:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:08.552 00:01:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:08.552 00:01:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:08.552 00:01:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.552 00:01:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.552 00:01:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.552 00:01:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.552 00:01:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.463 00:01:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.463 00:17:10.463 real 0m11.602s 00:17:10.463 user 0m12.121s 00:17:10.463 sys 0m5.724s 00:17:10.463 00:01:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:10.463 00:01:40 -- common/autotest_common.sh@10 -- # set +x 00:17:10.463 ************************************ 00:17:10.463 END TEST nvmf_bdevio 00:17:10.463 ************************************ 00:17:10.463 00:01:40 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:10.463 00:01:40 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:10.463 00:01:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:10.463 00:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.463 00:01:40 -- common/autotest_common.sh@10 -- # set +x 00:17:10.723 ************************************ 00:17:10.723 START TEST nvmf_bdevio_no_huge 00:17:10.723 ************************************ 00:17:10.723 00:01:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:10.723 * Looking for test storage... 
00:17:10.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.723 00:01:40 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.723 00:01:40 -- nvmf/common.sh@7 -- # uname -s 00:17:10.723 00:01:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.723 00:01:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.723 00:01:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.723 00:01:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.723 00:01:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.723 00:01:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.723 00:01:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.723 00:01:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.723 00:01:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.723 00:01:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.723 00:01:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.723 00:01:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.723 00:01:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.723 00:01:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.723 00:01:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.723 00:01:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.723 00:01:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.723 00:01:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.723 00:01:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.723 00:01:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.723 00:01:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.723 00:01:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.723 00:01:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.723 00:01:40 -- paths/export.sh@5 -- # export PATH 00:17:10.723 00:01:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.723 00:01:40 -- nvmf/common.sh@47 -- # : 0 00:17:10.723 00:01:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.724 00:01:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.724 00:01:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.724 00:01:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.724 00:01:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.724 00:01:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.724 00:01:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.724 00:01:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.724 00:01:40 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:10.724 00:01:40 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:10.724 00:01:40 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:10.724 00:01:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:10.724 00:01:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.724 00:01:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:10.724 00:01:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:10.724 00:01:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:10.724 00:01:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.724 00:01:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.724 00:01:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.724 00:01:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:10.724 00:01:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:10.724 00:01:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.724 00:01:40 -- common/autotest_common.sh@10 -- # set +x 00:17:18.867 00:01:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:18.867 00:01:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:18.867 00:01:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:18.867 00:01:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:18.867 00:01:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:18.867 00:01:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:18.867 00:01:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:18.867 00:01:47 -- nvmf/common.sh@295 -- # net_devs=() 00:17:18.867 00:01:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:18.867 00:01:47 -- nvmf/common.sh@296 
-- # e810=() 00:17:18.867 00:01:47 -- nvmf/common.sh@296 -- # local -ga e810 00:17:18.867 00:01:47 -- nvmf/common.sh@297 -- # x722=() 00:17:18.867 00:01:47 -- nvmf/common.sh@297 -- # local -ga x722 00:17:18.867 00:01:47 -- nvmf/common.sh@298 -- # mlx=() 00:17:18.867 00:01:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:18.867 00:01:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.867 00:01:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:18.867 00:01:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:18.867 00:01:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:18.867 00:01:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.867 00:01:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:18.867 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:18.867 00:01:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.867 00:01:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:18.867 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:18.867 00:01:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:18.867 00:01:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:18.867 00:01:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.868 00:01:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.868 00:01:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:18.868 00:01:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.868 00:01:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:18.868 Found 
net devices under 0000:31:00.0: cvl_0_0 00:17:18.868 00:01:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.868 00:01:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.868 00:01:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.868 00:01:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:18.868 00:01:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.868 00:01:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:18.868 Found net devices under 0000:31:00.1: cvl_0_1 00:17:18.868 00:01:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.868 00:01:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:18.868 00:01:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:18.868 00:01:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:18.868 00:01:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:18.868 00:01:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:18.868 00:01:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.868 00:01:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.868 00:01:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.868 00:01:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:18.868 00:01:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.868 00:01:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.868 00:01:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:18.868 00:01:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.868 00:01:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.868 00:01:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:18.868 00:01:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:18.868 00:01:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.868 00:01:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.868 00:01:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.868 00:01:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.868 00:01:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:18.868 00:01:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.868 00:01:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.868 00:01:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.868 00:01:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:18.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:17:18.868 00:17:18.868 --- 10.0.0.2 ping statistics --- 00:17:18.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.868 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:17:18.868 00:01:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:17:18.868 00:17:18.868 --- 10.0.0.1 ping statistics --- 00:17:18.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.868 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:17:18.868 00:01:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.868 00:01:47 -- nvmf/common.sh@411 -- # return 0 00:17:18.868 00:01:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:18.868 00:01:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.868 00:01:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:18.868 00:01:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:18.868 00:01:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.868 00:01:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:18.868 00:01:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:18.868 00:01:47 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:18.868 00:01:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:18.868 00:01:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:18.868 00:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 00:01:47 -- nvmf/common.sh@470 -- # nvmfpid=397707 00:17:18.868 00:01:47 -- nvmf/common.sh@471 -- # waitforlisten 397707 00:17:18.868 00:01:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:18.868 00:01:47 -- common/autotest_common.sh@817 -- # '[' -z 397707 ']' 00:17:18.868 00:01:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.868 00:01:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:18.868 00:01:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.868 00:01:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:18.868 00:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 [2024-04-27 00:01:47.951530] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:17:18.868 [2024-04-27 00:01:47.951580] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:18.868 [2024-04-27 00:01:48.040492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.868 [2024-04-27 00:01:48.134492] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.868 [2024-04-27 00:01:48.134534] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.868 [2024-04-27 00:01:48.134542] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.868 [2024-04-27 00:01:48.134548] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.868 [2024-04-27 00:01:48.134554] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:18.868 [2024-04-27 00:01:48.134702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:18.868 [2024-04-27 00:01:48.134863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:18.868 [2024-04-27 00:01:48.134963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:18.868 [2024-04-27 00:01:48.135135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.868 00:01:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:18.868 00:01:48 -- common/autotest_common.sh@850 -- # return 0 00:17:18.868 00:01:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:18.868 00:01:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:18.868 00:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 00:01:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.868 00:01:48 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.868 00:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.868 00:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 [2024-04-27 00:01:48.777174] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.868 00:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.868 00:01:48 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.868 00:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.868 00:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 Malloc0 00:17:18.868 00:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.868 00:01:48 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.868 00:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.868 00:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 00:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.868 00:01:48 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.868 00:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.868 00:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 00:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.868 00:01:48 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.868 00:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.868 00:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:18.868 [2024-04-27 00:01:48.831302] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.868 00:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.868 00:01:48 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:18.868 00:01:48 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:18.868 00:01:48 -- nvmf/common.sh@521 -- # config=() 00:17:18.868 00:01:48 -- nvmf/common.sh@521 -- # local subsystem config 00:17:18.868 00:01:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.868 00:01:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.868 { 00:17:18.868 "params": { 00:17:18.868 "name": "Nvme$subsystem", 00:17:18.868 "trtype": "$TEST_TRANSPORT", 00:17:18.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.868 "adrfam": "ipv4", 00:17:18.868 
"trsvcid": "$NVMF_PORT", 00:17:18.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.868 "hdgst": ${hdgst:-false}, 00:17:18.868 "ddgst": ${ddgst:-false} 00:17:18.868 }, 00:17:18.868 "method": "bdev_nvme_attach_controller" 00:17:18.868 } 00:17:18.868 EOF 00:17:18.868 )") 00:17:18.868 00:01:48 -- nvmf/common.sh@543 -- # cat 00:17:18.868 00:01:48 -- nvmf/common.sh@545 -- # jq . 00:17:18.868 00:01:48 -- nvmf/common.sh@546 -- # IFS=, 00:17:18.868 00:01:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:18.868 "params": { 00:17:18.868 "name": "Nvme1", 00:17:18.868 "trtype": "tcp", 00:17:18.868 "traddr": "10.0.0.2", 00:17:18.869 "adrfam": "ipv4", 00:17:18.869 "trsvcid": "4420", 00:17:18.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.869 "hdgst": false, 00:17:18.869 "ddgst": false 00:17:18.869 }, 00:17:18.869 "method": "bdev_nvme_attach_controller" 00:17:18.869 }' 00:17:18.869 [2024-04-27 00:01:48.885030] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:17:18.869 [2024-04-27 00:01:48.885098] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid397749 ] 00:17:18.869 [2024-04-27 00:01:48.955935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.869 [2024-04-27 00:01:49.052674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.869 [2024-04-27 00:01:49.052816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.869 [2024-04-27 00:01:49.052820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.170 I/O targets: 00:17:19.170 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:19.170 00:17:19.170 00:17:19.170 CUnit - A unit testing framework for C - Version 2.1-3 00:17:19.170 http://cunit.sourceforge.net/ 00:17:19.170 00:17:19.170 00:17:19.170 Suite: bdevio tests on: Nvme1n1 00:17:19.474 Test: blockdev write read block ...passed 00:17:19.474 Test: blockdev write zeroes read block ...passed 00:17:19.474 Test: blockdev write zeroes read no split ...passed 00:17:19.474 Test: blockdev write zeroes read split ...passed 00:17:19.474 Test: blockdev write zeroes read split partial ...passed 00:17:19.474 Test: blockdev reset ...[2024-04-27 00:01:49.568230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:19.474 [2024-04-27 00:01:49.568295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x732fa0 (9): Bad file descriptor 00:17:19.475 [2024-04-27 00:01:49.620803] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:19.475 passed 00:17:19.475 Test: blockdev write read 8 blocks ...passed 00:17:19.475 Test: blockdev write read size > 128k ...passed 00:17:19.475 Test: blockdev write read invalid size ...passed 00:17:19.736 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:19.736 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:19.736 Test: blockdev write read max offset ...passed 00:17:19.736 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:19.736 Test: blockdev writev readv 8 blocks ...passed 00:17:19.736 Test: blockdev writev readv 30 x 1block ...passed 00:17:19.736 Test: blockdev writev readv block ...passed 00:17:19.736 Test: blockdev writev readv size > 128k ...passed 00:17:19.736 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:19.736 Test: blockdev comparev and writev ...[2024-04-27 00:01:49.887450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.736 [2024-04-27 00:01:49.887475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.736 [2024-04-27 00:01:49.887486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.736 [2024-04-27 00:01:49.887496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:19.736 [2024-04-27 00:01:49.888025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.736 [2024-04-27 00:01:49.888033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:19.736 [2024-04-27 00:01:49.888042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.736 [2024-04-27 00:01:49.888048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:19.736 [2024-04-27 00:01:49.888601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.736 [2024-04-27 00:01:49.888609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:19.736 [2024-04-27 00:01:49.888618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.736 [2024-04-27 00:01:49.888624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:19.736 [2024-04-27 00:01:49.889106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.736 [2024-04-27 00:01:49.889113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:19.736 [2024-04-27 00:01:49.889122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.736 [2024-04-27 00:01:49.889127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:19.736 passed 00:17:19.998 Test: blockdev nvme passthru rw ...passed 00:17:19.998 Test: blockdev nvme passthru vendor specific ...[2024-04-27 00:01:49.973807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.998 [2024-04-27 00:01:49.973818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:19.998 [2024-04-27 00:01:49.974193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.998 [2024-04-27 00:01:49.974201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:19.998 [2024-04-27 00:01:49.974562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.998 [2024-04-27 00:01:49.974569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:19.998 [2024-04-27 00:01:49.974961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.998 [2024-04-27 00:01:49.974968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:19.998 passed 00:17:19.998 Test: blockdev nvme admin passthru ...passed 00:17:19.998 Test: blockdev copy ...passed 00:17:19.998 00:17:19.998 Run Summary: Type Total Ran Passed Failed Inactive 00:17:19.998 suites 1 1 n/a 0 0 00:17:19.999 tests 23 23 23 0 0 00:17:19.999 asserts 152 152 152 0 n/a 00:17:19.999 00:17:19.999 Elapsed time = 1.341 seconds 00:17:20.260 00:01:50 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.260 00:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.260 00:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:20.260 00:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.260 00:01:50 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:20.260 00:01:50 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:20.260 00:01:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:20.260 00:01:50 -- nvmf/common.sh@117 -- # sync 00:17:20.260 00:01:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.260 00:01:50 -- nvmf/common.sh@120 -- # set +e 00:17:20.260 00:01:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.260 00:01:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.260 rmmod nvme_tcp 00:17:20.260 rmmod nvme_fabrics 00:17:20.260 rmmod nvme_keyring 00:17:20.260 00:01:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.260 00:01:50 -- nvmf/common.sh@124 -- # set -e 00:17:20.260 00:01:50 -- nvmf/common.sh@125 -- # return 0 00:17:20.260 00:01:50 -- nvmf/common.sh@478 -- # '[' -n 397707 ']' 00:17:20.260 00:01:50 -- nvmf/common.sh@479 -- # killprocess 397707 00:17:20.260 00:01:50 -- common/autotest_common.sh@936 -- # '[' -z 397707 ']' 00:17:20.260 00:01:50 -- common/autotest_common.sh@940 -- # kill -0 397707 00:17:20.260 00:01:50 -- common/autotest_common.sh@941 -- # uname 00:17:20.260 00:01:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.260 00:01:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 397707 00:17:20.260 00:01:50 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:20.260 00:01:50 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:20.261 00:01:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 397707' 00:17:20.261 killing process with pid 397707 00:17:20.261 00:01:50 -- common/autotest_common.sh@955 -- # kill 397707 00:17:20.261 00:01:50 -- common/autotest_common.sh@960 -- # wait 397707 00:17:20.523 00:01:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:20.523 00:01:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:20.523 00:01:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:20.523 00:01:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.523 00:01:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.523 00:01:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.523 00:01:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.523 00:01:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.072 00:01:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:23.072 00:17:23.072 real 0m11.950s 00:17:23.072 user 0m14.684s 00:17:23.072 sys 0m6.017s 00:17:23.072 00:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:23.072 00:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:23.072 ************************************ 00:17:23.072 END TEST nvmf_bdevio_no_huge 00:17:23.072 ************************************ 00:17:23.072 00:01:52 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:23.072 00:01:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:23.072 00:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:23.072 00:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:23.072 ************************************ 00:17:23.072 START TEST nvmf_tls 00:17:23.072 ************************************ 00:17:23.072 00:01:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:23.072 * Looking for test storage... 
00:17:23.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.072 00:01:53 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.072 00:01:53 -- nvmf/common.sh@7 -- # uname -s 00:17:23.072 00:01:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.072 00:01:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.072 00:01:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.072 00:01:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.072 00:01:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.072 00:01:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.072 00:01:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.072 00:01:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.072 00:01:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.072 00:01:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.072 00:01:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.072 00:01:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.072 00:01:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.072 00:01:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.072 00:01:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.072 00:01:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.072 00:01:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.072 00:01:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.072 00:01:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.072 00:01:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.072 00:01:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.072 00:01:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.072 00:01:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.072 00:01:53 -- paths/export.sh@5 -- # export PATH 00:17:23.072 00:01:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.072 00:01:53 -- nvmf/common.sh@47 -- # : 0 00:17:23.072 00:01:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.072 00:01:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.072 00:01:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.072 00:01:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.072 00:01:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.072 00:01:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.072 00:01:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.072 00:01:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.072 00:01:53 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.072 00:01:53 -- target/tls.sh@62 -- # nvmftestinit 00:17:23.072 00:01:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:23.072 00:01:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.072 00:01:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:23.072 00:01:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:23.072 00:01:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:23.072 00:01:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.072 00:01:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.072 00:01:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.072 00:01:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:23.072 00:01:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:23.072 00:01:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.072 00:01:53 -- common/autotest_common.sh@10 -- # set +x 00:17:29.665 00:01:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:29.665 00:01:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.665 00:01:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.665 00:01:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.665 00:01:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.665 00:01:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.665 00:01:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.665 00:01:59 -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.665 00:01:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.665 00:01:59 -- nvmf/common.sh@296 -- # e810=() 00:17:29.665 
00:01:59 -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.665 00:01:59 -- nvmf/common.sh@297 -- # x722=() 00:17:29.665 00:01:59 -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.665 00:01:59 -- nvmf/common.sh@298 -- # mlx=() 00:17:29.665 00:01:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.665 00:01:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.665 00:01:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.665 00:01:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:29.665 00:01:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.665 00:01:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.665 00:01:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:29.665 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:29.665 00:01:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.665 00:01:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:29.665 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:29.665 00:01:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.665 00:01:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.665 00:01:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.665 00:01:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:29.665 00:01:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.665 00:01:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:29.665 Found net devices under 
0000:31:00.0: cvl_0_0 00:17:29.665 00:01:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.665 00:01:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.665 00:01:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.665 00:01:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:29.665 00:01:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.665 00:01:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:29.665 Found net devices under 0000:31:00.1: cvl_0_1 00:17:29.665 00:01:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.665 00:01:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:29.665 00:01:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:29.665 00:01:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:29.665 00:01:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:29.665 00:01:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.665 00:01:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.665 00:01:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.665 00:01:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.665 00:01:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.665 00:01:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.665 00:01:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.665 00:01:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.665 00:01:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.665 00:01:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.665 00:01:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.928 00:01:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.928 00:01:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.928 00:02:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.928 00:02:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.928 00:02:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.928 00:02:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.928 00:02:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.190 00:02:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.190 00:02:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:17:30.190 00:17:30.190 --- 10.0.0.2 ping statistics --- 00:17:30.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.190 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:17:30.190 00:02:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:17:30.190 00:17:30.190 --- 10.0.0.1 ping statistics --- 00:17:30.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.190 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:17:30.190 00:02:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.190 00:02:00 -- nvmf/common.sh@411 -- # return 0 00:17:30.190 00:02:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:30.190 00:02:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.190 00:02:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:30.190 00:02:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:30.190 00:02:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.190 00:02:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:30.190 00:02:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:30.190 00:02:00 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:30.190 00:02:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:30.190 00:02:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:30.190 00:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:30.190 00:02:00 -- nvmf/common.sh@470 -- # nvmfpid=402470 00:17:30.190 00:02:00 -- nvmf/common.sh@471 -- # waitforlisten 402470 00:17:30.190 00:02:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:30.190 00:02:00 -- common/autotest_common.sh@817 -- # '[' -z 402470 ']' 00:17:30.190 00:02:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.190 00:02:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.190 00:02:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.190 00:02:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.190 00:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:30.190 [2024-04-27 00:02:00.271776] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:17:30.190 [2024-04-27 00:02:00.271826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.190 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.190 [2024-04-27 00:02:00.338360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.190 [2024-04-27 00:02:00.400811] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.190 [2024-04-27 00:02:00.400855] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.190 [2024-04-27 00:02:00.400863] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.190 [2024-04-27 00:02:00.400869] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.190 [2024-04-27 00:02:00.400875] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
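For readers skimming the xtrace above, the nvmf_tcp_init phase reduces to the condensed recap below. Interface names, addresses and the namespace name are taken from this run; sudo, the preceding address flushes and error handling are omitted, so treat it as a sketch rather than the full nvmf/common.sh logic:

    ip netns add cvl_0_0_ns_spdk                                  # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # first of the two ports found above moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP reach the listener
    ping -c 1 10.0.0.2                                            # root namespace to namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespaced target back to initiator

With both pings answering, nvmf_tgt is launched inside cvl_0_0_ns_spdk with -m 0x2 --wait-for-rpc; that is the process whose startup notices appear just above, pid 402470 in this run.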
00:17:30.190 [2024-04-27 00:02:00.400899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.134 00:02:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.134 00:02:01 -- common/autotest_common.sh@850 -- # return 0 00:17:31.134 00:02:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:31.134 00:02:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:31.134 00:02:01 -- common/autotest_common.sh@10 -- # set +x 00:17:31.134 00:02:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.134 00:02:01 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:31.134 00:02:01 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:31.134 true 00:17:31.134 00:02:01 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.134 00:02:01 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:31.395 00:02:01 -- target/tls.sh@73 -- # version=0 00:17:31.395 00:02:01 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:31.395 00:02:01 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:31.395 00:02:01 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.395 00:02:01 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:31.656 00:02:01 -- target/tls.sh@81 -- # version=13 00:17:31.656 00:02:01 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:31.656 00:02:01 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:31.656 00:02:01 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.656 00:02:01 -- target/tls.sh@89 -- # jq -r .tls_version 00:17:31.916 00:02:01 -- target/tls.sh@89 -- # version=7 00:17:31.916 00:02:01 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:31.916 00:02:01 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.916 00:02:01 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:32.177 00:02:02 -- target/tls.sh@96 -- # ktls=false 00:17:32.177 00:02:02 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:32.177 00:02:02 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:32.177 00:02:02 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.177 00:02:02 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:32.438 00:02:02 -- target/tls.sh@104 -- # ktls=true 00:17:32.438 00:02:02 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:32.438 00:02:02 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:32.438 00:02:02 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.438 00:02:02 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:32.699 00:02:02 -- target/tls.sh@112 -- # ktls=false 00:17:32.699 00:02:02 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:32.699 00:02:02 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:17:32.699 00:02:02 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:32.699 00:02:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:32.699 00:02:02 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:32.699 00:02:02 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:32.699 00:02:02 -- nvmf/common.sh@693 -- # digest=1 00:17:32.699 00:02:02 -- nvmf/common.sh@694 -- # python - 00:17:32.699 00:02:02 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:32.699 00:02:02 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:32.699 00:02:02 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:32.699 00:02:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:32.699 00:02:02 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:32.699 00:02:02 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:32.699 00:02:02 -- nvmf/common.sh@693 -- # digest=1 00:17:32.699 00:02:02 -- nvmf/common.sh@694 -- # python - 00:17:32.699 00:02:02 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:32.699 00:02:02 -- target/tls.sh@121 -- # mktemp 00:17:32.699 00:02:02 -- target/tls.sh@121 -- # key_path=/tmp/tmp.iV9dfyylrF 00:17:32.699 00:02:02 -- target/tls.sh@122 -- # mktemp 00:17:32.699 00:02:02 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6QukJAt80H 00:17:32.699 00:02:02 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:32.699 00:02:02 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:32.699 00:02:02 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.iV9dfyylrF 00:17:32.699 00:02:02 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6QukJAt80H 00:17:32.699 00:02:02 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:32.959 00:02:02 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:33.219 00:02:03 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.iV9dfyylrF 00:17:33.219 00:02:03 -- target/tls.sh@49 -- # local key=/tmp/tmp.iV9dfyylrF 00:17:33.219 00:02:03 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:33.219 [2024-04-27 00:02:03.357362] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.219 00:02:03 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:33.481 00:02:03 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:33.481 [2024-04-27 00:02:03.662116] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:33.481 [2024-04-27 00:02:03.662344] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.481 00:02:03 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:33.742 malloc0 00:17:33.742 00:02:03 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:34.004 00:02:03 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iV9dfyylrF 00:17:34.004 [2024-04-27 00:02:04.109988] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:34.004 00:02:04 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.iV9dfyylrF 00:17:34.004 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.007 Initializing NVMe Controllers 00:17:44.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:44.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:44.007 Initialization complete. Launching workers. 00:17:44.007 ======================================================== 00:17:44.007 Latency(us) 00:17:44.007 Device Information : IOPS MiB/s Average min max 00:17:44.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13592.05 53.09 4709.25 1080.33 5350.12 00:17:44.007 ======================================================== 00:17:44.007 Total : 13592.05 53.09 4709.25 1080.33 5350.12 00:17:44.007 00:17:44.268 00:02:14 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iV9dfyylrF 00:17:44.268 00:02:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.268 00:02:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:44.268 00:02:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.268 00:02:14 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iV9dfyylrF' 00:17:44.268 00:02:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.268 00:02:14 -- target/tls.sh@28 -- # bdevperf_pid=405207 00:17:44.268 00:02:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.268 00:02:14 -- target/tls.sh@31 -- # waitforlisten 405207 /var/tmp/bdevperf.sock 00:17:44.268 00:02:14 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.268 00:02:14 -- common/autotest_common.sh@817 -- # '[' -z 405207 ']' 00:17:44.268 00:02:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.268 00:02:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:44.268 00:02:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.268 00:02:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:44.268 00:02:14 -- common/autotest_common.sh@10 -- # set +x 00:17:44.268 [2024-04-27 00:02:14.280246] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:17:44.268 [2024-04-27 00:02:14.280302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405207 ] 00:17:44.268 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.268 [2024-04-27 00:02:14.329998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.268 [2024-04-27 00:02:14.381480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.839 00:02:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:44.839 00:02:15 -- common/autotest_common.sh@850 -- # return 0 00:17:44.839 00:02:15 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iV9dfyylrF 00:17:45.100 [2024-04-27 00:02:15.146253] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.100 [2024-04-27 00:02:15.146310] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:45.100 TLSTESTn1 00:17:45.100 00:02:15 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:45.100 Running I/O for 10 seconds... 00:17:57.335 00:17:57.335 Latency(us) 00:17:57.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.335 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:57.335 Verification LBA range: start 0x0 length 0x2000 00:17:57.335 TLSTESTn1 : 10.10 2736.38 10.69 0.00 0.00 46561.79 6116.69 237677.23 00:17:57.335 =================================================================================================================== 00:17:57.335 Total : 2736.38 10.69 0.00 0.00 46561.79 6116.69 237677.23 00:17:57.335 0 00:17:57.335 00:02:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:57.335 00:02:25 -- target/tls.sh@45 -- # killprocess 405207 00:17:57.335 00:02:25 -- common/autotest_common.sh@936 -- # '[' -z 405207 ']' 00:17:57.335 00:02:25 -- common/autotest_common.sh@940 -- # kill -0 405207 00:17:57.335 00:02:25 -- common/autotest_common.sh@941 -- # uname 00:17:57.335 00:02:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.335 00:02:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 405207 00:17:57.335 00:02:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:57.335 00:02:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:57.335 00:02:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 405207' 00:17:57.335 killing process with pid 405207 00:17:57.335 00:02:25 -- common/autotest_common.sh@955 -- # kill 405207 00:17:57.335 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.335 00:17:57.335 Latency(us) 00:17:57.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.335 =================================================================================================================== 00:17:57.335 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.335 [2024-04-27 00:02:25.507691] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:57.335 00:02:25 -- common/autotest_common.sh@960 -- # wait 405207 00:17:57.335 00:02:25 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6QukJAt80H 00:17:57.335 00:02:25 -- common/autotest_common.sh@638 -- # local es=0 00:17:57.335 00:02:25 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6QukJAt80H 00:17:57.335 00:02:25 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:57.336 00:02:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:57.336 00:02:25 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:57.336 00:02:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:57.336 00:02:25 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6QukJAt80H 00:17:57.336 00:02:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.336 00:02:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:57.336 00:02:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.336 00:02:25 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6QukJAt80H' 00:17:57.336 00:02:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.336 00:02:25 -- target/tls.sh@28 -- # bdevperf_pid=407417 00:17:57.336 00:02:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.336 00:02:25 -- target/tls.sh@31 -- # waitforlisten 407417 /var/tmp/bdevperf.sock 00:17:57.336 00:02:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.336 00:02:25 -- common/autotest_common.sh@817 -- # '[' -z 407417 ']' 00:17:57.336 00:02:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.336 00:02:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.336 00:02:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.336 00:02:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.336 00:02:25 -- common/autotest_common.sh@10 -- # set +x 00:17:57.336 [2024-04-27 00:02:25.679225] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:17:57.336 [2024-04-27 00:02:25.679325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407417 ] 00:17:57.336 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.336 [2024-04-27 00:02:25.731907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.336 [2024-04-27 00:02:25.783697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.336 00:02:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:57.336 00:02:26 -- common/autotest_common.sh@850 -- # return 0 00:17:57.336 00:02:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6QukJAt80H 00:17:57.336 [2024-04-27 00:02:26.588597] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.336 [2024-04-27 00:02:26.588656] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:57.336 [2024-04-27 00:02:26.598445] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:57.336 [2024-04-27 00:02:26.598701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x811c00 (107): Transport endpoint is not connected 00:17:57.336 [2024-04-27 00:02:26.599696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x811c00 (9): Bad file descriptor 00:17:57.336 [2024-04-27 00:02:26.600698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.336 [2024-04-27 00:02:26.600705] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.336 [2024-04-27 00:02:26.600710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:57.336 request: 00:17:57.336 { 00:17:57.336 "name": "TLSTEST", 00:17:57.336 "trtype": "tcp", 00:17:57.336 "traddr": "10.0.0.2", 00:17:57.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.336 "adrfam": "ipv4", 00:17:57.336 "trsvcid": "4420", 00:17:57.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.336 "psk": "/tmp/tmp.6QukJAt80H", 00:17:57.336 "method": "bdev_nvme_attach_controller", 00:17:57.336 "req_id": 1 00:17:57.336 } 00:17:57.336 Got JSON-RPC error response 00:17:57.336 response: 00:17:57.336 { 00:17:57.336 "code": -32602, 00:17:57.336 "message": "Invalid parameters" 00:17:57.336 } 00:17:57.336 00:02:26 -- target/tls.sh@36 -- # killprocess 407417 00:17:57.336 00:02:26 -- common/autotest_common.sh@936 -- # '[' -z 407417 ']' 00:17:57.336 00:02:26 -- common/autotest_common.sh@940 -- # kill -0 407417 00:17:57.336 00:02:26 -- common/autotest_common.sh@941 -- # uname 00:17:57.336 00:02:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.336 00:02:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 407417 00:17:57.336 00:02:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:57.336 00:02:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:57.336 00:02:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 407417' 00:17:57.336 killing process with pid 407417 00:17:57.336 00:02:26 -- common/autotest_common.sh@955 -- # kill 407417 00:17:57.336 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.336 00:17:57.336 Latency(us) 00:17:57.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.336 =================================================================================================================== 00:17:57.336 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.336 [2024-04-27 00:02:26.672995] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:57.336 00:02:26 -- common/autotest_common.sh@960 -- # wait 407417 00:17:57.336 00:02:26 -- target/tls.sh@37 -- # return 1 00:17:57.336 00:02:26 -- common/autotest_common.sh@641 -- # es=1 00:17:57.336 00:02:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:57.336 00:02:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:57.336 00:02:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:57.336 00:02:26 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iV9dfyylrF 00:17:57.336 00:02:26 -- common/autotest_common.sh@638 -- # local es=0 00:17:57.336 00:02:26 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iV9dfyylrF 00:17:57.336 00:02:26 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:57.336 00:02:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:57.337 00:02:26 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:57.337 00:02:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:57.337 00:02:26 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iV9dfyylrF 00:17:57.337 00:02:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.337 00:02:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:57.337 00:02:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:17:57.337 00:02:26 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iV9dfyylrF' 00:17:57.337 00:02:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.337 00:02:26 -- target/tls.sh@28 -- # bdevperf_pid=407568 00:17:57.337 00:02:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.337 00:02:26 -- target/tls.sh@31 -- # waitforlisten 407568 /var/tmp/bdevperf.sock 00:17:57.337 00:02:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.337 00:02:26 -- common/autotest_common.sh@817 -- # '[' -z 407568 ']' 00:17:57.337 00:02:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.337 00:02:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.337 00:02:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.337 00:02:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.337 00:02:26 -- common/autotest_common.sh@10 -- # set +x 00:17:57.337 [2024-04-27 00:02:26.835532] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:17:57.337 [2024-04-27 00:02:26.835633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407568 ] 00:17:57.337 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.337 [2024-04-27 00:02:26.888467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.337 [2024-04-27 00:02:26.939649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.598 00:02:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:57.598 00:02:27 -- common/autotest_common.sh@850 -- # return 0 00:17:57.598 00:02:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.iV9dfyylrF 00:17:57.598 [2024-04-27 00:02:27.732615] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.598 [2024-04-27 00:02:27.732672] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:57.598 [2024-04-27 00:02:27.741161] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.598 [2024-04-27 00:02:27.741184] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.598 [2024-04-27 00:02:27.741209] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:57.598 [2024-04-27 00:02:27.741618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1407c00 (107): Transport endpoint is not connected 00:17:57.598 [2024-04-27 00:02:27.742612] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1407c00 (9): Bad file descriptor 00:17:57.598 [2024-04-27 00:02:27.743614] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.598 [2024-04-27 00:02:27.743620] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.598 [2024-04-27 00:02:27.743626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:57.598 request: 00:17:57.598 { 00:17:57.598 "name": "TLSTEST", 00:17:57.598 "trtype": "tcp", 00:17:57.598 "traddr": "10.0.0.2", 00:17:57.598 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:57.598 "adrfam": "ipv4", 00:17:57.598 "trsvcid": "4420", 00:17:57.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.598 "psk": "/tmp/tmp.iV9dfyylrF", 00:17:57.598 "method": "bdev_nvme_attach_controller", 00:17:57.598 "req_id": 1 00:17:57.598 } 00:17:57.598 Got JSON-RPC error response 00:17:57.598 response: 00:17:57.598 { 00:17:57.598 "code": -32602, 00:17:57.598 "message": "Invalid parameters" 00:17:57.598 } 00:17:57.598 00:02:27 -- target/tls.sh@36 -- # killprocess 407568 00:17:57.598 00:02:27 -- common/autotest_common.sh@936 -- # '[' -z 407568 ']' 00:17:57.598 00:02:27 -- common/autotest_common.sh@940 -- # kill -0 407568 00:17:57.598 00:02:27 -- common/autotest_common.sh@941 -- # uname 00:17:57.598 00:02:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.598 00:02:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 407568 00:17:57.598 00:02:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:57.598 00:02:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:57.598 00:02:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 407568' 00:17:57.598 killing process with pid 407568 00:17:57.598 00:02:27 -- common/autotest_common.sh@955 -- # kill 407568 00:17:57.598 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.598 00:17:57.598 Latency(us) 00:17:57.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.598 =================================================================================================================== 00:17:57.598 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.598 [2024-04-27 00:02:27.811187] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:57.598 00:02:27 -- common/autotest_common.sh@960 -- # wait 407568 00:17:57.859 00:02:27 -- target/tls.sh@37 -- # return 1 00:17:57.859 00:02:27 -- common/autotest_common.sh@641 -- # es=1 00:17:57.859 00:02:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:57.859 00:02:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:57.859 00:02:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:57.859 00:02:27 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iV9dfyylrF 00:17:57.859 00:02:27 -- common/autotest_common.sh@638 -- # local es=0 00:17:57.859 00:02:27 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iV9dfyylrF 00:17:57.859 00:02:27 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:57.859 00:02:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:57.859 00:02:27 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:57.859 00:02:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:57.859 00:02:27 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iV9dfyylrF 00:17:57.859 00:02:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.859 00:02:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:57.859 00:02:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.860 00:02:27 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iV9dfyylrF' 00:17:57.860 00:02:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.860 00:02:27 -- target/tls.sh@28 -- # bdevperf_pid=407902 00:17:57.860 00:02:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.860 00:02:27 -- target/tls.sh@31 -- # waitforlisten 407902 /var/tmp/bdevperf.sock 00:17:57.860 00:02:27 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.860 00:02:27 -- common/autotest_common.sh@817 -- # '[' -z 407902 ']' 00:17:57.860 00:02:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.860 00:02:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.860 00:02:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.860 00:02:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.860 00:02:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.860 [2024-04-27 00:02:27.964660] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:17:57.860 [2024-04-27 00:02:27.964714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407902 ] 00:17:57.860 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.860 [2024-04-27 00:02:28.015422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.860 [2024-04-27 00:02:28.065536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.804 00:02:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:58.804 00:02:28 -- common/autotest_common.sh@850 -- # return 0 00:17:58.804 00:02:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iV9dfyylrF 00:17:58.804 [2024-04-27 00:02:28.874592] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.804 [2024-04-27 00:02:28.874680] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:58.804 [2024-04-27 00:02:28.883981] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.804 [2024-04-27 00:02:28.884003] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.804 [2024-04-27 00:02:28.884028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:58.804 [2024-04-27 00:02:28.884682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116c00 (107): Transport endpoint is not connected 00:17:58.804 [2024-04-27 00:02:28.885676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116c00 (9): Bad file descriptor 00:17:58.804 [2024-04-27 00:02:28.886678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:58.804 [2024-04-27 00:02:28.886685] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:58.805 [2024-04-27 00:02:28.886690] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:58.805 request: 00:17:58.805 { 00:17:58.805 "name": "TLSTEST", 00:17:58.805 "trtype": "tcp", 00:17:58.805 "traddr": "10.0.0.2", 00:17:58.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.805 "adrfam": "ipv4", 00:17:58.805 "trsvcid": "4420", 00:17:58.805 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:58.805 "psk": "/tmp/tmp.iV9dfyylrF", 00:17:58.805 "method": "bdev_nvme_attach_controller", 00:17:58.805 "req_id": 1 00:17:58.805 } 00:17:58.805 Got JSON-RPC error response 00:17:58.805 response: 00:17:58.805 { 00:17:58.805 "code": -32602, 00:17:58.805 "message": "Invalid parameters" 00:17:58.805 } 00:17:58.805 00:02:28 -- target/tls.sh@36 -- # killprocess 407902 00:17:58.805 00:02:28 -- common/autotest_common.sh@936 -- # '[' -z 407902 ']' 00:17:58.805 00:02:28 -- common/autotest_common.sh@940 -- # kill -0 407902 00:17:58.805 00:02:28 -- common/autotest_common.sh@941 -- # uname 00:17:58.805 00:02:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.805 00:02:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 407902 00:17:58.805 00:02:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:58.805 00:02:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:58.805 00:02:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 407902' 00:17:58.805 killing process with pid 407902 00:17:58.805 00:02:28 -- common/autotest_common.sh@955 -- # kill 407902 00:17:58.805 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.805 00:17:58.805 Latency(us) 00:17:58.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.805 =================================================================================================================== 00:17:58.805 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.805 [2024-04-27 00:02:28.974730] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:58.805 00:02:28 -- common/autotest_common.sh@960 -- # wait 407902 00:17:59.066 00:02:29 -- target/tls.sh@37 -- # return 1 00:17:59.066 00:02:29 -- common/autotest_common.sh@641 -- # es=1 00:17:59.066 00:02:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:59.066 00:02:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:59.066 00:02:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:59.066 00:02:29 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.066 00:02:29 -- common/autotest_common.sh@638 -- # local es=0 00:17:59.066 00:02:29 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.066 00:02:29 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:59.066 00:02:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:59.066 00:02:29 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:59.066 00:02:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:59.066 00:02:29 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:59.066 00:02:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.066 00:02:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.066 00:02:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.066 00:02:29 -- target/tls.sh@23 -- # psk= 
00:17:59.066 00:02:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.066 00:02:29 -- target/tls.sh@28 -- # bdevperf_pid=408115 00:17:59.066 00:02:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.066 00:02:29 -- target/tls.sh@31 -- # waitforlisten 408115 /var/tmp/bdevperf.sock 00:17:59.066 00:02:29 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.066 00:02:29 -- common/autotest_common.sh@817 -- # '[' -z 408115 ']' 00:17:59.066 00:02:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.066 00:02:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:59.066 00:02:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.066 00:02:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:59.066 00:02:29 -- common/autotest_common.sh@10 -- # set +x 00:17:59.066 [2024-04-27 00:02:29.129729] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:17:59.066 [2024-04-27 00:02:29.129781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408115 ] 00:17:59.066 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.066 [2024-04-27 00:02:29.180425] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.066 [2024-04-27 00:02:29.232647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.010 00:02:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:00.010 00:02:29 -- common/autotest_common.sh@850 -- # return 0 00:18:00.010 00:02:29 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:00.010 [2024-04-27 00:02:30.046897] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:00.010 [2024-04-27 00:02:30.048578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf935a0 (9): Bad file descriptor 00:18:00.010 [2024-04-27 00:02:30.049576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:00.010 [2024-04-27 00:02:30.049585] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:00.010 [2024-04-27 00:02:30.049590] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:00.010 request: 00:18:00.010 { 00:18:00.010 "name": "TLSTEST", 00:18:00.010 "trtype": "tcp", 00:18:00.010 "traddr": "10.0.0.2", 00:18:00.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.010 "adrfam": "ipv4", 00:18:00.010 "trsvcid": "4420", 00:18:00.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.010 "method": "bdev_nvme_attach_controller", 00:18:00.010 "req_id": 1 00:18:00.011 } 00:18:00.011 Got JSON-RPC error response 00:18:00.011 response: 00:18:00.011 { 00:18:00.011 "code": -32602, 00:18:00.011 "message": "Invalid parameters" 00:18:00.011 } 00:18:00.011 00:02:30 -- target/tls.sh@36 -- # killprocess 408115 00:18:00.011 00:02:30 -- common/autotest_common.sh@936 -- # '[' -z 408115 ']' 00:18:00.011 00:02:30 -- common/autotest_common.sh@940 -- # kill -0 408115 00:18:00.011 00:02:30 -- common/autotest_common.sh@941 -- # uname 00:18:00.011 00:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.011 00:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 408115 00:18:00.011 00:02:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:00.011 00:02:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:00.011 00:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 408115' 00:18:00.011 killing process with pid 408115 00:18:00.011 00:02:30 -- common/autotest_common.sh@955 -- # kill 408115 00:18:00.011 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.011 00:18:00.011 Latency(us) 00:18:00.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.011 =================================================================================================================== 00:18:00.011 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.011 00:02:30 -- common/autotest_common.sh@960 -- # wait 408115 00:18:00.273 00:02:30 -- target/tls.sh@37 -- # return 1 00:18:00.273 00:02:30 -- common/autotest_common.sh@641 -- # es=1 00:18:00.273 00:02:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:00.273 00:02:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:00.273 00:02:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:00.273 00:02:30 -- target/tls.sh@158 -- # killprocess 402470 00:18:00.273 00:02:30 -- common/autotest_common.sh@936 -- # '[' -z 402470 ']' 00:18:00.273 00:02:30 -- common/autotest_common.sh@940 -- # kill -0 402470 00:18:00.273 00:02:30 -- common/autotest_common.sh@941 -- # uname 00:18:00.273 00:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.273 00:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 402470 00:18:00.273 00:02:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:00.273 00:02:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:00.273 00:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 402470' 00:18:00.273 killing process with pid 402470 00:18:00.273 00:02:30 -- common/autotest_common.sh@955 -- # kill 402470 00:18:00.273 [2024-04-27 00:02:30.293969] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:00.273 00:02:30 -- common/autotest_common.sh@960 -- # wait 402470 00:18:00.273 00:02:30 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.273 00:02:30 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.273 00:02:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:00.273 00:02:30 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:00.273 00:02:30 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:00.273 00:02:30 -- nvmf/common.sh@693 -- # digest=2 00:18:00.273 00:02:30 -- nvmf/common.sh@694 -- # python - 00:18:00.273 00:02:30 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.273 00:02:30 -- target/tls.sh@160 -- # mktemp 00:18:00.273 00:02:30 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ApF9KQHZmo 00:18:00.273 00:02:30 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.273 00:02:30 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ApF9KQHZmo 00:18:00.273 00:02:30 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:00.273 00:02:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:00.273 00:02:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:00.273 00:02:30 -- common/autotest_common.sh@10 -- # set +x 00:18:00.534 00:02:30 -- nvmf/common.sh@470 -- # nvmfpid=408320 00:18:00.534 00:02:30 -- nvmf/common.sh@471 -- # waitforlisten 408320 00:18:00.534 00:02:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.534 00:02:30 -- common/autotest_common.sh@817 -- # '[' -z 408320 ']' 00:18:00.534 00:02:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.534 00:02:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.534 00:02:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.534 00:02:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.534 00:02:30 -- common/autotest_common.sh@10 -- # set +x 00:18:00.534 [2024-04-27 00:02:30.547306] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:00.534 [2024-04-27 00:02:30.547361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.534 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.534 [2024-04-27 00:02:30.615243] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.534 [2024-04-27 00:02:30.682174] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.534 [2024-04-27 00:02:30.682216] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.534 [2024-04-27 00:02:30.682224] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.534 [2024-04-27 00:02:30.682231] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.534 [2024-04-27 00:02:30.682236] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
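The key_long value generated above follows the NVMe TLS PSK interchange layout: a prefix, a two-digit hash identifier (02 here), and a base64 blob, separated by colons. A small sketch that reproduces that string, under the assumption that the blob is the ASCII key followed by its little-endian CRC32, is:

    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
        # Assumption: the blob is base64(key bytes + little-endian CRC32 of those bytes).
        raw = key.encode("ascii")
        crc = zlib.crc32(raw).to_bytes(4, "little")
        blob = base64.b64encode(raw + crc).decode("ascii")
        return "{}:{:02}:{}:".format(prefix, digest, blob)

    # Should print the key_long value logged above, if the CRC assumption holds.
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))

The resulting string is written to /tmp/tmp.ApF9KQHZmo and chmod'ed to 0600; looser permissions on the PSK file are rejected by the target, which is exercised later in this run.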
00:18:00.534 [2024-04-27 00:02:30.682257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.106 00:02:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:01.106 00:02:31 -- common/autotest_common.sh@850 -- # return 0 00:18:01.106 00:02:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:01.106 00:02:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:01.106 00:02:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.367 00:02:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.367 00:02:31 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ApF9KQHZmo 00:18:01.367 00:02:31 -- target/tls.sh@49 -- # local key=/tmp/tmp.ApF9KQHZmo 00:18:01.367 00:02:31 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:01.367 [2024-04-27 00:02:31.489177] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.367 00:02:31 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:01.628 00:02:31 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:01.628 [2024-04-27 00:02:31.797947] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.628 [2024-04-27 00:02:31.798172] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.628 00:02:31 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:01.889 malloc0 00:18:01.889 00:02:31 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:01.889 00:02:32 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ApF9KQHZmo 00:18:02.151 [2024-04-27 00:02:32.229815] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:02.151 00:02:32 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApF9KQHZmo 00:18:02.151 00:02:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.151 00:02:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.151 00:02:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.151 00:02:32 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ApF9KQHZmo' 00:18:02.151 00:02:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.151 00:02:32 -- target/tls.sh@28 -- # bdevperf_pid=408684 00:18:02.151 00:02:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.151 00:02:32 -- target/tls.sh@31 -- # waitforlisten 408684 /var/tmp/bdevperf.sock 00:18:02.151 00:02:32 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.151 00:02:32 -- common/autotest_common.sh@817 -- # '[' -z 408684 ']' 00:18:02.151 00:02:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.151 00:02:32 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:02.151 00:02:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.151 00:02:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:02.151 00:02:32 -- common/autotest_common.sh@10 -- # set +x 00:18:02.151 [2024-04-27 00:02:32.294882] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:02.151 [2024-04-27 00:02:32.294933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408684 ] 00:18:02.151 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.151 [2024-04-27 00:02:32.344373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.413 [2024-04-27 00:02:32.396050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.983 00:02:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:02.983 00:02:33 -- common/autotest_common.sh@850 -- # return 0 00:18:02.983 00:02:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ApF9KQHZmo 00:18:02.983 [2024-04-27 00:02:33.196923] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.983 [2024-04-27 00:02:33.196980] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:03.243 TLSTESTn1 00:18:03.243 00:02:33 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:03.243 Running I/O for 10 seconds... 
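rpc.py translates the bdev_nvme_attach_controller call above into a single JSON-RPC request written to the bdevperf UNIX socket (-s /var/tmp/bdevperf.sock). A rough sketch of that exchange, with the parameter names taken from the request bodies echoed earlier in this log, might be:

    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "/tmp/tmp.ApF9KQHZmo",  # path to the interchange-format key file
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/bdevperf.sock")
        sock.sendall(json.dumps(request).encode())
        # A single recv is assumed to be enough for these small replies.
        print(sock.recv(65536).decode())

When the PSK file is readable and its permissions are acceptable the reply carries a result; otherwise an error object like the -32602 or -1 responses seen elsewhere in this log comes back.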
00:18:13.335 00:18:13.336 Latency(us) 00:18:13.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.336 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.336 Verification LBA range: start 0x0 length 0x2000 00:18:13.336 TLSTESTn1 : 10.03 4919.34 19.22 0.00 0.00 25977.64 5679.79 63788.37 00:18:13.336 =================================================================================================================== 00:18:13.336 Total : 4919.34 19.22 0.00 0.00 25977.64 5679.79 63788.37 00:18:13.336 0 00:18:13.336 00:02:43 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.336 00:02:43 -- target/tls.sh@45 -- # killprocess 408684 00:18:13.336 00:02:43 -- common/autotest_common.sh@936 -- # '[' -z 408684 ']' 00:18:13.336 00:02:43 -- common/autotest_common.sh@940 -- # kill -0 408684 00:18:13.336 00:02:43 -- common/autotest_common.sh@941 -- # uname 00:18:13.336 00:02:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.336 00:02:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 408684 00:18:13.336 00:02:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:13.336 00:02:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:13.336 00:02:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 408684' 00:18:13.336 killing process with pid 408684 00:18:13.336 00:02:43 -- common/autotest_common.sh@955 -- # kill 408684 00:18:13.336 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.336 00:18:13.336 Latency(us) 00:18:13.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.336 =================================================================================================================== 00:18:13.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.336 [2024-04-27 00:02:43.502184] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:13.336 00:02:43 -- common/autotest_common.sh@960 -- # wait 408684 00:18:13.595 00:02:43 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ApF9KQHZmo 00:18:13.595 00:02:43 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApF9KQHZmo 00:18:13.595 00:02:43 -- common/autotest_common.sh@638 -- # local es=0 00:18:13.595 00:02:43 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApF9KQHZmo 00:18:13.596 00:02:43 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:13.596 00:02:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.596 00:02:43 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:13.596 00:02:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.596 00:02:43 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApF9KQHZmo 00:18:13.596 00:02:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.596 00:02:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.596 00:02:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.596 00:02:43 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ApF9KQHZmo' 00:18:13.596 00:02:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.596 00:02:43 -- target/tls.sh@28 -- # bdevperf_pid=410979 
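As a quick consistency check on the TLSTESTn1 table above: 4919.34 IOPS at the 4096-byte IO size used by bdevperf (-o 4096) is 4919.34 × 4096 ≈ 20,149,617 bytes/s, and dividing by 1,048,576 gives ≈ 19.22 MiB/s, matching the reported MiB/s column for the 10-second run.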
00:18:13.596 00:02:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.596 00:02:43 -- target/tls.sh@31 -- # waitforlisten 410979 /var/tmp/bdevperf.sock 00:18:13.596 00:02:43 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.596 00:02:43 -- common/autotest_common.sh@817 -- # '[' -z 410979 ']' 00:18:13.596 00:02:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.596 00:02:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.596 00:02:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.596 00:02:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.596 00:02:43 -- common/autotest_common.sh@10 -- # set +x 00:18:13.596 [2024-04-27 00:02:43.670093] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:13.596 [2024-04-27 00:02:43.670155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410979 ] 00:18:13.596 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.596 [2024-04-27 00:02:43.719400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.596 [2024-04-27 00:02:43.770509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.534 00:02:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:14.534 00:02:44 -- common/autotest_common.sh@850 -- # return 0 00:18:14.534 00:02:44 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ApF9KQHZmo 00:18:14.534 [2024-04-27 00:02:44.571366] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.534 [2024-04-27 00:02:44.571409] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:14.534 [2024-04-27 00:02:44.571414] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ApF9KQHZmo 00:18:14.534 request: 00:18:14.534 { 00:18:14.534 "name": "TLSTEST", 00:18:14.534 "trtype": "tcp", 00:18:14.534 "traddr": "10.0.0.2", 00:18:14.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.534 "adrfam": "ipv4", 00:18:14.534 "trsvcid": "4420", 00:18:14.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.534 "psk": "/tmp/tmp.ApF9KQHZmo", 00:18:14.534 "method": "bdev_nvme_attach_controller", 00:18:14.534 "req_id": 1 00:18:14.534 } 00:18:14.534 Got JSON-RPC error response 00:18:14.534 response: 00:18:14.534 { 00:18:14.534 "code": -1, 00:18:14.534 "message": "Operation not permitted" 00:18:14.534 } 00:18:14.534 00:02:44 -- target/tls.sh@36 -- # killprocess 410979 00:18:14.534 00:02:44 -- common/autotest_common.sh@936 -- # '[' -z 410979 ']' 00:18:14.534 00:02:44 -- common/autotest_common.sh@940 -- # kill -0 410979 00:18:14.534 00:02:44 -- common/autotest_common.sh@941 -- # uname 00:18:14.534 00:02:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.534 00:02:44 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 410979 00:18:14.534 00:02:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:14.534 00:02:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:14.534 00:02:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 410979' 00:18:14.534 killing process with pid 410979 00:18:14.534 00:02:44 -- common/autotest_common.sh@955 -- # kill 410979 00:18:14.534 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.534 00:18:14.534 Latency(us) 00:18:14.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.534 =================================================================================================================== 00:18:14.534 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.534 00:02:44 -- common/autotest_common.sh@960 -- # wait 410979 00:18:14.534 00:02:44 -- target/tls.sh@37 -- # return 1 00:18:14.534 00:02:44 -- common/autotest_common.sh@641 -- # es=1 00:18:14.534 00:02:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:14.534 00:02:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:14.534 00:02:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:14.534 00:02:44 -- target/tls.sh@174 -- # killprocess 408320 00:18:14.535 00:02:44 -- common/autotest_common.sh@936 -- # '[' -z 408320 ']' 00:18:14.535 00:02:44 -- common/autotest_common.sh@940 -- # kill -0 408320 00:18:14.535 00:02:44 -- common/autotest_common.sh@941 -- # uname 00:18:14.535 00:02:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.795 00:02:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 408320 00:18:14.795 00:02:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:14.795 00:02:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:14.795 00:02:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 408320' 00:18:14.795 killing process with pid 408320 00:18:14.795 00:02:44 -- common/autotest_common.sh@955 -- # kill 408320 00:18:14.795 [2024-04-27 00:02:44.805327] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:14.795 00:02:44 -- common/autotest_common.sh@960 -- # wait 408320 00:18:14.795 00:02:44 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:14.795 00:02:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:14.795 00:02:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:14.795 00:02:44 -- common/autotest_common.sh@10 -- # set +x 00:18:14.795 00:02:44 -- nvmf/common.sh@470 -- # nvmfpid=411326 00:18:14.795 00:02:44 -- nvmf/common.sh@471 -- # waitforlisten 411326 00:18:14.795 00:02:44 -- common/autotest_common.sh@817 -- # '[' -z 411326 ']' 00:18:14.795 00:02:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.795 00:02:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.795 00:02:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:14.795 00:02:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:14.795 00:02:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:14.795 00:02:44 -- common/autotest_common.sh@10 -- # set +x 00:18:14.795 [2024-04-27 00:02:45.012729] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:14.795 [2024-04-27 00:02:45.012789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.056 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.056 [2024-04-27 00:02:45.077360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.056 [2024-04-27 00:02:45.141189] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.056 [2024-04-27 00:02:45.141227] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.056 [2024-04-27 00:02:45.141234] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.056 [2024-04-27 00:02:45.141241] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.056 [2024-04-27 00:02:45.141247] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.056 [2024-04-27 00:02:45.141273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.626 00:02:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:15.626 00:02:45 -- common/autotest_common.sh@850 -- # return 0 00:18:15.626 00:02:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:15.626 00:02:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:15.626 00:02:45 -- common/autotest_common.sh@10 -- # set +x 00:18:15.626 00:02:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.626 00:02:45 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ApF9KQHZmo 00:18:15.626 00:02:45 -- common/autotest_common.sh@638 -- # local es=0 00:18:15.626 00:02:45 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ApF9KQHZmo 00:18:15.626 00:02:45 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:18:15.626 00:02:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.626 00:02:45 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:18:15.626 00:02:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.626 00:02:45 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.ApF9KQHZmo 00:18:15.626 00:02:45 -- target/tls.sh@49 -- # local key=/tmp/tmp.ApF9KQHZmo 00:18:15.626 00:02:45 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:15.886 [2024-04-27 00:02:45.940122] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.886 00:02:45 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:15.886 00:02:46 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:16.145 [2024-04-27 00:02:46.212805] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.145 [2024-04-27 00:02:46.213023] tcp.c: 
965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.145 00:02:46 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:16.405 malloc0 00:18:16.405 00:02:46 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:16.405 00:02:46 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ApF9KQHZmo 00:18:16.665 [2024-04-27 00:02:46.660683] tcp.c:3565:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:16.665 [2024-04-27 00:02:46.660708] tcp.c:3651:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:16.665 [2024-04-27 00:02:46.660731] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:16.665 request: 00:18:16.665 { 00:18:16.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.665 "host": "nqn.2016-06.io.spdk:host1", 00:18:16.665 "psk": "/tmp/tmp.ApF9KQHZmo", 00:18:16.665 "method": "nvmf_subsystem_add_host", 00:18:16.665 "req_id": 1 00:18:16.665 } 00:18:16.665 Got JSON-RPC error response 00:18:16.665 response: 00:18:16.665 { 00:18:16.665 "code": -32603, 00:18:16.665 "message": "Internal error" 00:18:16.665 } 00:18:16.665 00:02:46 -- common/autotest_common.sh@641 -- # es=1 00:18:16.665 00:02:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:16.665 00:02:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:16.665 00:02:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:16.665 00:02:46 -- target/tls.sh@180 -- # killprocess 411326 00:18:16.665 00:02:46 -- common/autotest_common.sh@936 -- # '[' -z 411326 ']' 00:18:16.665 00:02:46 -- common/autotest_common.sh@940 -- # kill -0 411326 00:18:16.665 00:02:46 -- common/autotest_common.sh@941 -- # uname 00:18:16.665 00:02:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:16.665 00:02:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 411326 00:18:16.665 00:02:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:16.665 00:02:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:16.665 00:02:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 411326' 00:18:16.665 killing process with pid 411326 00:18:16.665 00:02:46 -- common/autotest_common.sh@955 -- # kill 411326 00:18:16.665 00:02:46 -- common/autotest_common.sh@960 -- # wait 411326 00:18:16.665 00:02:46 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ApF9KQHZmo 00:18:16.665 00:02:46 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:16.665 00:02:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:16.665 00:02:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:16.665 00:02:46 -- common/autotest_common.sh@10 -- # set +x 00:18:16.665 00:02:46 -- nvmf/common.sh@470 -- # nvmfpid=411697 00:18:16.665 00:02:46 -- nvmf/common.sh@471 -- # waitforlisten 411697 00:18:16.665 00:02:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:16.665 00:02:46 -- common/autotest_common.sh@817 -- # '[' -z 411697 ']' 00:18:16.665 00:02:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.665 00:02:46 -- common/autotest_common.sh@822 -- # 
local max_retries=100 00:18:16.665 00:02:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.665 00:02:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:16.665 00:02:46 -- common/autotest_common.sh@10 -- # set +x 00:18:16.926 [2024-04-27 00:02:46.930851] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:16.926 [2024-04-27 00:02:46.930911] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.926 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.926 [2024-04-27 00:02:46.995905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.926 [2024-04-27 00:02:47.060936] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.926 [2024-04-27 00:02:47.060977] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.926 [2024-04-27 00:02:47.060985] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.926 [2024-04-27 00:02:47.060992] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.926 [2024-04-27 00:02:47.060997] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.926 [2024-04-27 00:02:47.061020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.496 00:02:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:17.496 00:02:47 -- common/autotest_common.sh@850 -- # return 0 00:18:17.496 00:02:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:17.496 00:02:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:17.496 00:02:47 -- common/autotest_common.sh@10 -- # set +x 00:18:17.756 00:02:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.757 00:02:47 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ApF9KQHZmo 00:18:17.757 00:02:47 -- target/tls.sh@49 -- # local key=/tmp/tmp.ApF9KQHZmo 00:18:17.757 00:02:47 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:17.757 [2024-04-27 00:02:47.867504] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.757 00:02:47 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:18.017 00:02:48 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:18.017 [2024-04-27 00:02:48.172260] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.017 [2024-04-27 00:02:48.172484] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.017 00:02:48 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:18.277 malloc0 00:18:18.277 00:02:48 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:18.277 00:02:48 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ApF9KQHZmo 00:18:18.538 [2024-04-27 00:02:48.616157] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:18.538 00:02:48 -- target/tls.sh@188 -- # bdevperf_pid=412052 00:18:18.538 00:02:48 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.538 00:02:48 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.538 00:02:48 -- target/tls.sh@191 -- # waitforlisten 412052 /var/tmp/bdevperf.sock 00:18:18.538 00:02:48 -- common/autotest_common.sh@817 -- # '[' -z 412052 ']' 00:18:18.538 00:02:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.538 00:02:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:18.538 00:02:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.538 00:02:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:18.538 00:02:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.538 [2024-04-27 00:02:48.684915] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:18.538 [2024-04-27 00:02:48.684983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412052 ] 00:18:18.538 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.538 [2024-04-27 00:02:48.735006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.798 [2024-04-27 00:02:48.786567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.369 00:02:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.369 00:02:49 -- common/autotest_common.sh@850 -- # return 0 00:18:19.369 00:02:49 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ApF9KQHZmo 00:18:19.369 [2024-04-27 00:02:49.579334] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.369 [2024-04-27 00:02:49.579390] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:19.631 TLSTESTn1 00:18:19.631 00:02:49 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:19.893 00:02:49 -- target/tls.sh@196 -- # tgtconf='{ 00:18:19.893 "subsystems": [ 00:18:19.893 { 00:18:19.893 "subsystem": "keyring", 00:18:19.893 "config": [] 00:18:19.893 }, 00:18:19.893 { 00:18:19.893 "subsystem": "iobuf", 00:18:19.893 "config": [ 00:18:19.893 { 00:18:19.893 "method": "iobuf_set_options", 00:18:19.893 "params": { 00:18:19.893 
"small_pool_count": 8192, 00:18:19.893 "large_pool_count": 1024, 00:18:19.893 "small_bufsize": 8192, 00:18:19.893 "large_bufsize": 135168 00:18:19.893 } 00:18:19.893 } 00:18:19.893 ] 00:18:19.893 }, 00:18:19.893 { 00:18:19.893 "subsystem": "sock", 00:18:19.893 "config": [ 00:18:19.893 { 00:18:19.893 "method": "sock_impl_set_options", 00:18:19.893 "params": { 00:18:19.893 "impl_name": "posix", 00:18:19.893 "recv_buf_size": 2097152, 00:18:19.893 "send_buf_size": 2097152, 00:18:19.893 "enable_recv_pipe": true, 00:18:19.893 "enable_quickack": false, 00:18:19.893 "enable_placement_id": 0, 00:18:19.893 "enable_zerocopy_send_server": true, 00:18:19.893 "enable_zerocopy_send_client": false, 00:18:19.893 "zerocopy_threshold": 0, 00:18:19.893 "tls_version": 0, 00:18:19.893 "enable_ktls": false 00:18:19.893 } 00:18:19.893 }, 00:18:19.893 { 00:18:19.893 "method": "sock_impl_set_options", 00:18:19.893 "params": { 00:18:19.893 "impl_name": "ssl", 00:18:19.893 "recv_buf_size": 4096, 00:18:19.893 "send_buf_size": 4096, 00:18:19.893 "enable_recv_pipe": true, 00:18:19.893 "enable_quickack": false, 00:18:19.893 "enable_placement_id": 0, 00:18:19.893 "enable_zerocopy_send_server": true, 00:18:19.893 "enable_zerocopy_send_client": false, 00:18:19.893 "zerocopy_threshold": 0, 00:18:19.893 "tls_version": 0, 00:18:19.893 "enable_ktls": false 00:18:19.893 } 00:18:19.893 } 00:18:19.893 ] 00:18:19.893 }, 00:18:19.893 { 00:18:19.893 "subsystem": "vmd", 00:18:19.893 "config": [] 00:18:19.893 }, 00:18:19.893 { 00:18:19.893 "subsystem": "accel", 00:18:19.893 "config": [ 00:18:19.893 { 00:18:19.893 "method": "accel_set_options", 00:18:19.893 "params": { 00:18:19.893 "small_cache_size": 128, 00:18:19.893 "large_cache_size": 16, 00:18:19.893 "task_count": 2048, 00:18:19.893 "sequence_count": 2048, 00:18:19.893 "buf_count": 2048 00:18:19.893 } 00:18:19.893 } 00:18:19.893 ] 00:18:19.893 }, 00:18:19.893 { 00:18:19.893 "subsystem": "bdev", 00:18:19.893 "config": [ 00:18:19.893 { 00:18:19.893 "method": "bdev_set_options", 00:18:19.893 "params": { 00:18:19.893 "bdev_io_pool_size": 65535, 00:18:19.893 "bdev_io_cache_size": 256, 00:18:19.893 "bdev_auto_examine": true, 00:18:19.893 "iobuf_small_cache_size": 128, 00:18:19.893 "iobuf_large_cache_size": 16 00:18:19.893 } 00:18:19.893 }, 00:18:19.893 { 00:18:19.893 "method": "bdev_raid_set_options", 00:18:19.893 "params": { 00:18:19.893 "process_window_size_kb": 1024 00:18:19.893 } 00:18:19.893 }, 00:18:19.893 { 00:18:19.893 "method": "bdev_iscsi_set_options", 00:18:19.893 "params": { 00:18:19.893 "timeout_sec": 30 00:18:19.893 } 00:18:19.893 }, 00:18:19.893 { 00:18:19.893 "method": "bdev_nvme_set_options", 00:18:19.893 "params": { 00:18:19.893 "action_on_timeout": "none", 00:18:19.893 "timeout_us": 0, 00:18:19.893 "timeout_admin_us": 0, 00:18:19.893 "keep_alive_timeout_ms": 10000, 00:18:19.893 "arbitration_burst": 0, 00:18:19.893 "low_priority_weight": 0, 00:18:19.893 "medium_priority_weight": 0, 00:18:19.893 "high_priority_weight": 0, 00:18:19.893 "nvme_adminq_poll_period_us": 10000, 00:18:19.893 "nvme_ioq_poll_period_us": 0, 00:18:19.893 "io_queue_requests": 0, 00:18:19.893 "delay_cmd_submit": true, 00:18:19.893 "transport_retry_count": 4, 00:18:19.893 "bdev_retry_count": 3, 00:18:19.893 "transport_ack_timeout": 0, 00:18:19.893 "ctrlr_loss_timeout_sec": 0, 00:18:19.893 "reconnect_delay_sec": 0, 00:18:19.893 "fast_io_fail_timeout_sec": 0, 00:18:19.893 "disable_auto_failback": false, 00:18:19.893 "generate_uuids": false, 00:18:19.893 "transport_tos": 0, 00:18:19.893 "nvme_error_stat": 
false, 00:18:19.893 "rdma_srq_size": 0, 00:18:19.893 "io_path_stat": false, 00:18:19.893 "allow_accel_sequence": false, 00:18:19.893 "rdma_max_cq_size": 0, 00:18:19.893 "rdma_cm_event_timeout_ms": 0, 00:18:19.893 "dhchap_digests": [ 00:18:19.893 "sha256", 00:18:19.893 "sha384", 00:18:19.893 "sha512" 00:18:19.893 ], 00:18:19.893 "dhchap_dhgroups": [ 00:18:19.893 "null", 00:18:19.893 "ffdhe2048", 00:18:19.893 "ffdhe3072", 00:18:19.893 "ffdhe4096", 00:18:19.893 "ffdhe6144", 00:18:19.893 "ffdhe8192" 00:18:19.893 ] 00:18:19.893 } 00:18:19.893 }, 00:18:19.894 { 00:18:19.894 "method": "bdev_nvme_set_hotplug", 00:18:19.894 "params": { 00:18:19.894 "period_us": 100000, 00:18:19.894 "enable": false 00:18:19.894 } 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "method": "bdev_malloc_create", 00:18:19.894 "params": { 00:18:19.894 "name": "malloc0", 00:18:19.894 "num_blocks": 8192, 00:18:19.894 "block_size": 4096, 00:18:19.894 "physical_block_size": 4096, 00:18:19.894 "uuid": "115f0105-706d-4e77-97b7-979fcdfcc74b", 00:18:19.894 "optimal_io_boundary": 0 00:18:19.894 } 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "method": "bdev_wait_for_examine" 00:18:19.894 } 00:18:19.894 ] 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "subsystem": "nbd", 00:18:19.894 "config": [] 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "subsystem": "scheduler", 00:18:19.894 "config": [ 00:18:19.894 { 00:18:19.894 "method": "framework_set_scheduler", 00:18:19.894 "params": { 00:18:19.894 "name": "static" 00:18:19.894 } 00:18:19.894 } 00:18:19.894 ] 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "subsystem": "nvmf", 00:18:19.894 "config": [ 00:18:19.894 { 00:18:19.894 "method": "nvmf_set_config", 00:18:19.894 "params": { 00:18:19.894 "discovery_filter": "match_any", 00:18:19.894 "admin_cmd_passthru": { 00:18:19.894 "identify_ctrlr": false 00:18:19.894 } 00:18:19.894 } 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "method": "nvmf_set_max_subsystems", 00:18:19.894 "params": { 00:18:19.894 "max_subsystems": 1024 00:18:19.894 } 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "method": "nvmf_set_crdt", 00:18:19.894 "params": { 00:18:19.894 "crdt1": 0, 00:18:19.894 "crdt2": 0, 00:18:19.894 "crdt3": 0 00:18:19.894 } 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "method": "nvmf_create_transport", 00:18:19.894 "params": { 00:18:19.894 "trtype": "TCP", 00:18:19.894 "max_queue_depth": 128, 00:18:19.894 "max_io_qpairs_per_ctrlr": 127, 00:18:19.894 "in_capsule_data_size": 4096, 00:18:19.894 "max_io_size": 131072, 00:18:19.894 "io_unit_size": 131072, 00:18:19.894 "max_aq_depth": 128, 00:18:19.894 "num_shared_buffers": 511, 00:18:19.894 "buf_cache_size": 4294967295, 00:18:19.894 "dif_insert_or_strip": false, 00:18:19.894 "zcopy": false, 00:18:19.894 "c2h_success": false, 00:18:19.894 "sock_priority": 0, 00:18:19.894 "abort_timeout_sec": 1, 00:18:19.894 "ack_timeout": 0, 00:18:19.894 "data_wr_pool_size": 0 00:18:19.894 } 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "method": "nvmf_create_subsystem", 00:18:19.894 "params": { 00:18:19.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.894 "allow_any_host": false, 00:18:19.894 "serial_number": "SPDK00000000000001", 00:18:19.894 "model_number": "SPDK bdev Controller", 00:18:19.894 "max_namespaces": 10, 00:18:19.894 "min_cntlid": 1, 00:18:19.894 "max_cntlid": 65519, 00:18:19.894 "ana_reporting": false 00:18:19.894 } 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "method": "nvmf_subsystem_add_host", 00:18:19.894 "params": { 00:18:19.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.894 "host": "nqn.2016-06.io.spdk:host1", 
00:18:19.894 "psk": "/tmp/tmp.ApF9KQHZmo" 00:18:19.894 } 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "method": "nvmf_subsystem_add_ns", 00:18:19.894 "params": { 00:18:19.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.894 "namespace": { 00:18:19.894 "nsid": 1, 00:18:19.894 "bdev_name": "malloc0", 00:18:19.894 "nguid": "115F0105706D4E7797B7979FCDFCC74B", 00:18:19.894 "uuid": "115f0105-706d-4e77-97b7-979fcdfcc74b", 00:18:19.894 "no_auto_visible": false 00:18:19.894 } 00:18:19.894 } 00:18:19.894 }, 00:18:19.894 { 00:18:19.894 "method": "nvmf_subsystem_add_listener", 00:18:19.894 "params": { 00:18:19.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.894 "listen_address": { 00:18:19.894 "trtype": "TCP", 00:18:19.894 "adrfam": "IPv4", 00:18:19.894 "traddr": "10.0.0.2", 00:18:19.894 "trsvcid": "4420" 00:18:19.894 }, 00:18:19.894 "secure_channel": true 00:18:19.894 } 00:18:19.894 } 00:18:19.894 ] 00:18:19.894 } 00:18:19.894 ] 00:18:19.894 }' 00:18:19.894 00:02:49 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:20.156 00:02:50 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:20.156 "subsystems": [ 00:18:20.156 { 00:18:20.156 "subsystem": "keyring", 00:18:20.156 "config": [] 00:18:20.156 }, 00:18:20.156 { 00:18:20.156 "subsystem": "iobuf", 00:18:20.156 "config": [ 00:18:20.156 { 00:18:20.156 "method": "iobuf_set_options", 00:18:20.156 "params": { 00:18:20.156 "small_pool_count": 8192, 00:18:20.156 "large_pool_count": 1024, 00:18:20.156 "small_bufsize": 8192, 00:18:20.156 "large_bufsize": 135168 00:18:20.156 } 00:18:20.156 } 00:18:20.156 ] 00:18:20.156 }, 00:18:20.156 { 00:18:20.156 "subsystem": "sock", 00:18:20.156 "config": [ 00:18:20.156 { 00:18:20.156 "method": "sock_impl_set_options", 00:18:20.156 "params": { 00:18:20.156 "impl_name": "posix", 00:18:20.156 "recv_buf_size": 2097152, 00:18:20.156 "send_buf_size": 2097152, 00:18:20.156 "enable_recv_pipe": true, 00:18:20.156 "enable_quickack": false, 00:18:20.156 "enable_placement_id": 0, 00:18:20.156 "enable_zerocopy_send_server": true, 00:18:20.156 "enable_zerocopy_send_client": false, 00:18:20.156 "zerocopy_threshold": 0, 00:18:20.156 "tls_version": 0, 00:18:20.156 "enable_ktls": false 00:18:20.156 } 00:18:20.156 }, 00:18:20.156 { 00:18:20.156 "method": "sock_impl_set_options", 00:18:20.156 "params": { 00:18:20.156 "impl_name": "ssl", 00:18:20.156 "recv_buf_size": 4096, 00:18:20.156 "send_buf_size": 4096, 00:18:20.156 "enable_recv_pipe": true, 00:18:20.156 "enable_quickack": false, 00:18:20.156 "enable_placement_id": 0, 00:18:20.156 "enable_zerocopy_send_server": true, 00:18:20.156 "enable_zerocopy_send_client": false, 00:18:20.156 "zerocopy_threshold": 0, 00:18:20.156 "tls_version": 0, 00:18:20.156 "enable_ktls": false 00:18:20.156 } 00:18:20.156 } 00:18:20.156 ] 00:18:20.156 }, 00:18:20.156 { 00:18:20.156 "subsystem": "vmd", 00:18:20.156 "config": [] 00:18:20.156 }, 00:18:20.156 { 00:18:20.156 "subsystem": "accel", 00:18:20.156 "config": [ 00:18:20.156 { 00:18:20.156 "method": "accel_set_options", 00:18:20.156 "params": { 00:18:20.156 "small_cache_size": 128, 00:18:20.156 "large_cache_size": 16, 00:18:20.156 "task_count": 2048, 00:18:20.156 "sequence_count": 2048, 00:18:20.156 "buf_count": 2048 00:18:20.156 } 00:18:20.156 } 00:18:20.156 ] 00:18:20.156 }, 00:18:20.156 { 00:18:20.156 "subsystem": "bdev", 00:18:20.156 "config": [ 00:18:20.156 { 00:18:20.156 "method": "bdev_set_options", 00:18:20.156 "params": { 00:18:20.156 "bdev_io_pool_size": 65535, 
00:18:20.156 "bdev_io_cache_size": 256, 00:18:20.156 "bdev_auto_examine": true, 00:18:20.156 "iobuf_small_cache_size": 128, 00:18:20.156 "iobuf_large_cache_size": 16 00:18:20.156 } 00:18:20.156 }, 00:18:20.156 { 00:18:20.156 "method": "bdev_raid_set_options", 00:18:20.156 "params": { 00:18:20.156 "process_window_size_kb": 1024 00:18:20.156 } 00:18:20.156 }, 00:18:20.156 { 00:18:20.156 "method": "bdev_iscsi_set_options", 00:18:20.156 "params": { 00:18:20.156 "timeout_sec": 30 00:18:20.156 } 00:18:20.156 }, 00:18:20.156 { 00:18:20.156 "method": "bdev_nvme_set_options", 00:18:20.156 "params": { 00:18:20.156 "action_on_timeout": "none", 00:18:20.156 "timeout_us": 0, 00:18:20.156 "timeout_admin_us": 0, 00:18:20.156 "keep_alive_timeout_ms": 10000, 00:18:20.156 "arbitration_burst": 0, 00:18:20.156 "low_priority_weight": 0, 00:18:20.156 "medium_priority_weight": 0, 00:18:20.156 "high_priority_weight": 0, 00:18:20.156 "nvme_adminq_poll_period_us": 10000, 00:18:20.156 "nvme_ioq_poll_period_us": 0, 00:18:20.156 "io_queue_requests": 512, 00:18:20.156 "delay_cmd_submit": true, 00:18:20.156 "transport_retry_count": 4, 00:18:20.156 "bdev_retry_count": 3, 00:18:20.156 "transport_ack_timeout": 0, 00:18:20.156 "ctrlr_loss_timeout_sec": 0, 00:18:20.156 "reconnect_delay_sec": 0, 00:18:20.156 "fast_io_fail_timeout_sec": 0, 00:18:20.156 "disable_auto_failback": false, 00:18:20.156 "generate_uuids": false, 00:18:20.156 "transport_tos": 0, 00:18:20.156 "nvme_error_stat": false, 00:18:20.156 "rdma_srq_size": 0, 00:18:20.156 "io_path_stat": false, 00:18:20.156 "allow_accel_sequence": false, 00:18:20.156 "rdma_max_cq_size": 0, 00:18:20.157 "rdma_cm_event_timeout_ms": 0, 00:18:20.157 "dhchap_digests": [ 00:18:20.157 "sha256", 00:18:20.157 "sha384", 00:18:20.157 "sha512" 00:18:20.157 ], 00:18:20.157 "dhchap_dhgroups": [ 00:18:20.157 "null", 00:18:20.157 "ffdhe2048", 00:18:20.157 "ffdhe3072", 00:18:20.157 "ffdhe4096", 00:18:20.157 "ffdhe6144", 00:18:20.157 "ffdhe8192" 00:18:20.157 ] 00:18:20.157 } 00:18:20.157 }, 00:18:20.157 { 00:18:20.157 "method": "bdev_nvme_attach_controller", 00:18:20.157 "params": { 00:18:20.157 "name": "TLSTEST", 00:18:20.157 "trtype": "TCP", 00:18:20.157 "adrfam": "IPv4", 00:18:20.157 "traddr": "10.0.0.2", 00:18:20.157 "trsvcid": "4420", 00:18:20.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.157 "prchk_reftag": false, 00:18:20.157 "prchk_guard": false, 00:18:20.157 "ctrlr_loss_timeout_sec": 0, 00:18:20.157 "reconnect_delay_sec": 0, 00:18:20.157 "fast_io_fail_timeout_sec": 0, 00:18:20.157 "psk": "/tmp/tmp.ApF9KQHZmo", 00:18:20.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.157 "hdgst": false, 00:18:20.157 "ddgst": false 00:18:20.157 } 00:18:20.157 }, 00:18:20.157 { 00:18:20.157 "method": "bdev_nvme_set_hotplug", 00:18:20.157 "params": { 00:18:20.157 "period_us": 100000, 00:18:20.157 "enable": false 00:18:20.157 } 00:18:20.157 }, 00:18:20.157 { 00:18:20.157 "method": "bdev_wait_for_examine" 00:18:20.157 } 00:18:20.157 ] 00:18:20.157 }, 00:18:20.157 { 00:18:20.157 "subsystem": "nbd", 00:18:20.157 "config": [] 00:18:20.157 } 00:18:20.157 ] 00:18:20.157 }' 00:18:20.157 00:02:50 -- target/tls.sh@199 -- # killprocess 412052 00:18:20.157 00:02:50 -- common/autotest_common.sh@936 -- # '[' -z 412052 ']' 00:18:20.157 00:02:50 -- common/autotest_common.sh@940 -- # kill -0 412052 00:18:20.157 00:02:50 -- common/autotest_common.sh@941 -- # uname 00:18:20.157 00:02:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:20.157 00:02:50 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 412052 00:18:20.157 00:02:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:20.157 00:02:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:20.157 00:02:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 412052' 00:18:20.157 killing process with pid 412052 00:18:20.157 00:02:50 -- common/autotest_common.sh@955 -- # kill 412052 00:18:20.157 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.157 00:18:20.157 Latency(us) 00:18:20.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.157 =================================================================================================================== 00:18:20.157 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.157 [2024-04-27 00:02:50.206279] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:20.157 00:02:50 -- common/autotest_common.sh@960 -- # wait 412052 00:18:20.157 00:02:50 -- target/tls.sh@200 -- # killprocess 411697 00:18:20.157 00:02:50 -- common/autotest_common.sh@936 -- # '[' -z 411697 ']' 00:18:20.157 00:02:50 -- common/autotest_common.sh@940 -- # kill -0 411697 00:18:20.157 00:02:50 -- common/autotest_common.sh@941 -- # uname 00:18:20.157 00:02:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:20.157 00:02:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 411697 00:18:20.157 00:02:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:20.157 00:02:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:20.157 00:02:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 411697' 00:18:20.157 killing process with pid 411697 00:18:20.157 00:02:50 -- common/autotest_common.sh@955 -- # kill 411697 00:18:20.157 [2024-04-27 00:02:50.374715] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:20.157 00:02:50 -- common/autotest_common.sh@960 -- # wait 411697 00:18:20.419 00:02:50 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:20.419 00:02:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:20.419 00:02:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:20.419 00:02:50 -- common/autotest_common.sh@10 -- # set +x 00:18:20.419 00:02:50 -- target/tls.sh@203 -- # echo '{ 00:18:20.419 "subsystems": [ 00:18:20.419 { 00:18:20.419 "subsystem": "keyring", 00:18:20.419 "config": [] 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "subsystem": "iobuf", 00:18:20.419 "config": [ 00:18:20.419 { 00:18:20.419 "method": "iobuf_set_options", 00:18:20.419 "params": { 00:18:20.419 "small_pool_count": 8192, 00:18:20.419 "large_pool_count": 1024, 00:18:20.419 "small_bufsize": 8192, 00:18:20.419 "large_bufsize": 135168 00:18:20.419 } 00:18:20.419 } 00:18:20.419 ] 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "subsystem": "sock", 00:18:20.419 "config": [ 00:18:20.419 { 00:18:20.419 "method": "sock_impl_set_options", 00:18:20.419 "params": { 00:18:20.419 "impl_name": "posix", 00:18:20.419 "recv_buf_size": 2097152, 00:18:20.419 "send_buf_size": 2097152, 00:18:20.419 "enable_recv_pipe": true, 00:18:20.419 "enable_quickack": false, 00:18:20.419 "enable_placement_id": 0, 00:18:20.419 "enable_zerocopy_send_server": true, 00:18:20.419 "enable_zerocopy_send_client": false, 00:18:20.419 "zerocopy_threshold": 0, 00:18:20.419 
"tls_version": 0, 00:18:20.419 "enable_ktls": false 00:18:20.419 } 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "method": "sock_impl_set_options", 00:18:20.419 "params": { 00:18:20.419 "impl_name": "ssl", 00:18:20.419 "recv_buf_size": 4096, 00:18:20.419 "send_buf_size": 4096, 00:18:20.419 "enable_recv_pipe": true, 00:18:20.419 "enable_quickack": false, 00:18:20.419 "enable_placement_id": 0, 00:18:20.419 "enable_zerocopy_send_server": true, 00:18:20.419 "enable_zerocopy_send_client": false, 00:18:20.419 "zerocopy_threshold": 0, 00:18:20.419 "tls_version": 0, 00:18:20.419 "enable_ktls": false 00:18:20.419 } 00:18:20.419 } 00:18:20.419 ] 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "subsystem": "vmd", 00:18:20.419 "config": [] 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "subsystem": "accel", 00:18:20.419 "config": [ 00:18:20.419 { 00:18:20.419 "method": "accel_set_options", 00:18:20.419 "params": { 00:18:20.419 "small_cache_size": 128, 00:18:20.419 "large_cache_size": 16, 00:18:20.419 "task_count": 2048, 00:18:20.419 "sequence_count": 2048, 00:18:20.419 "buf_count": 2048 00:18:20.419 } 00:18:20.419 } 00:18:20.419 ] 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "subsystem": "bdev", 00:18:20.419 "config": [ 00:18:20.419 { 00:18:20.419 "method": "bdev_set_options", 00:18:20.419 "params": { 00:18:20.419 "bdev_io_pool_size": 65535, 00:18:20.419 "bdev_io_cache_size": 256, 00:18:20.419 "bdev_auto_examine": true, 00:18:20.419 "iobuf_small_cache_size": 128, 00:18:20.419 "iobuf_large_cache_size": 16 00:18:20.419 } 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "method": "bdev_raid_set_options", 00:18:20.419 "params": { 00:18:20.419 "process_window_size_kb": 1024 00:18:20.419 } 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "method": "bdev_iscsi_set_options", 00:18:20.419 "params": { 00:18:20.419 "timeout_sec": 30 00:18:20.419 } 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "method": "bdev_nvme_set_options", 00:18:20.419 "params": { 00:18:20.419 "action_on_timeout": "none", 00:18:20.419 "timeout_us": 0, 00:18:20.419 "timeout_admin_us": 0, 00:18:20.419 "keep_alive_timeout_ms": 10000, 00:18:20.419 "arbitration_burst": 0, 00:18:20.419 "low_priority_weight": 0, 00:18:20.419 "medium_priority_weight": 0, 00:18:20.419 "high_priority_weight": 0, 00:18:20.419 "nvme_adminq_poll_period_us": 10000, 00:18:20.419 "nvme_ioq_poll_period_us": 0, 00:18:20.419 "io_queue_requests": 0, 00:18:20.419 "delay_cmd_submit": true, 00:18:20.419 "transport_retry_count": 4, 00:18:20.419 "bdev_retry_count": 3, 00:18:20.419 "transport_ack_timeout": 0, 00:18:20.419 "ctrlr_loss_timeout_sec": 0, 00:18:20.419 "reconnect_delay_sec": 0, 00:18:20.419 "fast_io_fail_timeout_sec": 0, 00:18:20.419 "disable_auto_failback": false, 00:18:20.419 "generate_uuids": false, 00:18:20.419 "transport_tos": 0, 00:18:20.419 "nvme_error_stat": false, 00:18:20.419 "rdma_srq_size": 0, 00:18:20.419 "io_path_stat": false, 00:18:20.419 "allow_accel_sequence": false, 00:18:20.419 "rdma_max_cq_size": 0, 00:18:20.419 "rdma_cm_event_timeout_ms": 0, 00:18:20.419 "dhchap_digests": [ 00:18:20.419 "sha256", 00:18:20.419 "sha384", 00:18:20.419 "sha512" 00:18:20.419 ], 00:18:20.419 "dhchap_dhgroups": [ 00:18:20.419 "null", 00:18:20.419 "ffdhe2048", 00:18:20.419 "ffdhe3072", 00:18:20.419 "ffdhe4096", 00:18:20.419 "ffdhe6144", 00:18:20.419 "ffdhe8192" 00:18:20.419 ] 00:18:20.419 } 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "method": "bdev_nvme_set_hotplug", 00:18:20.419 "params": { 00:18:20.419 "period_us": 100000, 00:18:20.419 "enable": false 00:18:20.419 } 00:18:20.419 }, 00:18:20.419 
{ 00:18:20.419 "method": "bdev_malloc_create", 00:18:20.419 "params": { 00:18:20.419 "name": "malloc0", 00:18:20.419 "num_blocks": 8192, 00:18:20.419 "block_size": 4096, 00:18:20.419 "physical_block_size": 4096, 00:18:20.419 "uuid": "115f0105-706d-4e77-97b7-979fcdfcc74b", 00:18:20.419 "optimal_io_boundary": 0 00:18:20.419 } 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "method": "bdev_wait_for_examine" 00:18:20.419 } 00:18:20.419 ] 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "subsystem": "nbd", 00:18:20.419 "config": [] 00:18:20.419 }, 00:18:20.419 { 00:18:20.419 "subsystem": "scheduler", 00:18:20.419 "config": [ 00:18:20.419 { 00:18:20.419 "method": "framework_set_scheduler", 00:18:20.419 "params": { 00:18:20.419 "name": "static" 00:18:20.419 } 00:18:20.419 } 00:18:20.419 ] 00:18:20.420 }, 00:18:20.420 { 00:18:20.420 "subsystem": "nvmf", 00:18:20.420 "config": [ 00:18:20.420 { 00:18:20.420 "method": "nvmf_set_config", 00:18:20.420 "params": { 00:18:20.420 "discovery_filter": "match_any", 00:18:20.420 "admin_cmd_passthru": { 00:18:20.420 "identify_ctrlr": false 00:18:20.420 } 00:18:20.420 } 00:18:20.420 }, 00:18:20.420 { 00:18:20.420 "method": "nvmf_set_max_subsystems", 00:18:20.420 "params": { 00:18:20.420 "max_subsystems": 1024 00:18:20.420 } 00:18:20.420 }, 00:18:20.420 { 00:18:20.420 "method": "nvmf_set_crdt", 00:18:20.420 "params": { 00:18:20.420 "crdt1": 0, 00:18:20.420 "crdt2": 0, 00:18:20.420 "crdt3": 0 00:18:20.420 } 00:18:20.420 }, 00:18:20.420 { 00:18:20.420 "method": "nvmf_create_transport", 00:18:20.420 "params": { 00:18:20.420 "trtype": "TCP", 00:18:20.420 "max_queue_depth": 128, 00:18:20.420 "max_io_qpairs_per_ctrlr": 127, 00:18:20.420 "in_capsule_data_size": 4096, 00:18:20.420 "max_io_size": 131072, 00:18:20.420 "io_unit_size": 131072, 00:18:20.420 "max_aq_depth": 128, 00:18:20.420 "num_shared_buffers": 511, 00:18:20.420 "buf_cache_size": 4294967295, 00:18:20.420 "dif_insert_or_strip": false, 00:18:20.420 "zcopy": false, 00:18:20.420 "c2h_success": false, 00:18:20.420 "sock_priority": 0, 00:18:20.420 "abort_timeout_sec": 1, 00:18:20.420 "ack_timeout": 0, 00:18:20.420 "data_wr_pool_size": 0 00:18:20.420 } 00:18:20.420 }, 00:18:20.420 { 00:18:20.420 "method": "nvmf_create_subsystem", 00:18:20.420 "params": { 00:18:20.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.420 "allow_any_host": false, 00:18:20.420 "serial_number": "SPDK00000000000001", 00:18:20.420 "model_number": "SPDK bdev Controller", 00:18:20.420 "max_namespaces": 10, 00:18:20.420 "min_cntlid": 1, 00:18:20.420 "max_cntlid": 65519, 00:18:20.420 "ana_reporting": false 00:18:20.420 } 00:18:20.420 }, 00:18:20.420 { 00:18:20.420 "method": "nvmf_subsystem_add_host", 00:18:20.420 "params": { 00:18:20.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.420 "host": "nqn.2016-06.io.spdk:host1", 00:18:20.420 "psk": "/tmp/tmp.ApF9KQHZmo" 00:18:20.420 } 00:18:20.420 }, 00:18:20.420 { 00:18:20.420 "method": "nvmf_subsystem_add_ns", 00:18:20.420 "params": { 00:18:20.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.420 "namespace": { 00:18:20.420 "nsid": 1, 00:18:20.420 "bdev_name": "malloc0", 00:18:20.420 "nguid": "115F0105706D4E7797B7979FCDFCC74B", 00:18:20.420 "uuid": "115f0105-706d-4e77-97b7-979fcdfcc74b", 00:18:20.420 "no_auto_visible": false 00:18:20.420 } 00:18:20.420 } 00:18:20.420 }, 00:18:20.420 { 00:18:20.420 "method": "nvmf_subsystem_add_listener", 00:18:20.420 "params": { 00:18:20.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.420 "listen_address": { 00:18:20.420 "trtype": "TCP", 00:18:20.420 "adrfam": "IPv4", 
00:18:20.420 "traddr": "10.0.0.2", 00:18:20.420 "trsvcid": "4420" 00:18:20.420 }, 00:18:20.420 "secure_channel": true 00:18:20.420 } 00:18:20.420 } 00:18:20.420 ] 00:18:20.420 } 00:18:20.420 ] 00:18:20.420 }' 00:18:20.420 00:02:50 -- nvmf/common.sh@470 -- # nvmfpid=412411 00:18:20.420 00:02:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:20.420 00:02:50 -- nvmf/common.sh@471 -- # waitforlisten 412411 00:18:20.420 00:02:50 -- common/autotest_common.sh@817 -- # '[' -z 412411 ']' 00:18:20.420 00:02:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.420 00:02:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.420 00:02:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.420 00:02:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.420 00:02:50 -- common/autotest_common.sh@10 -- # set +x 00:18:20.420 [2024-04-27 00:02:50.572518] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:20.420 [2024-04-27 00:02:50.572596] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.420 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.682 [2024-04-27 00:02:50.644247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.682 [2024-04-27 00:02:50.709466] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.682 [2024-04-27 00:02:50.709507] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.682 [2024-04-27 00:02:50.709515] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.682 [2024-04-27 00:02:50.709521] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.682 [2024-04-27 00:02:50.709526] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:20.682 [2024-04-27 00:02:50.709583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.682 [2024-04-27 00:02:50.890778] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.943 [2024-04-27 00:02:50.906718] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:20.943 [2024-04-27 00:02:50.922775] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.943 [2024-04-27 00:02:50.933129] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.232 00:02:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.232 00:02:51 -- common/autotest_common.sh@850 -- # return 0 00:18:21.233 00:02:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:21.233 00:02:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:21.233 00:02:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.233 00:02:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.233 00:02:51 -- target/tls.sh@207 -- # bdevperf_pid=412562 00:18:21.233 00:02:51 -- target/tls.sh@208 -- # waitforlisten 412562 /var/tmp/bdevperf.sock 00:18:21.233 00:02:51 -- common/autotest_common.sh@817 -- # '[' -z 412562 ']' 00:18:21.233 00:02:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.233 00:02:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:21.233 00:02:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
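At this point the target is up and listening on 10.0.0.2:4420 with the secure channel enabled, and the test moves on to launching bdevperf. Independently of the test flow, the subsystem, listener and allowed-host state could be inspected over the target's default RPC socket (a sketch; rpc.py path as used elsewhere in this log, /var/tmp/spdk.sock is the socket waitforlisten polls above):

  # Query the running target for its subsystems, listeners and allowed hosts.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems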
00:18:21.233 00:02:51 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:21.233 00:02:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:21.233 00:02:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.233 00:02:51 -- target/tls.sh@204 -- # echo '{ 00:18:21.233 "subsystems": [ 00:18:21.233 { 00:18:21.233 "subsystem": "keyring", 00:18:21.233 "config": [] 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "subsystem": "iobuf", 00:18:21.233 "config": [ 00:18:21.233 { 00:18:21.233 "method": "iobuf_set_options", 00:18:21.233 "params": { 00:18:21.233 "small_pool_count": 8192, 00:18:21.233 "large_pool_count": 1024, 00:18:21.233 "small_bufsize": 8192, 00:18:21.233 "large_bufsize": 135168 00:18:21.233 } 00:18:21.233 } 00:18:21.233 ] 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "subsystem": "sock", 00:18:21.233 "config": [ 00:18:21.233 { 00:18:21.233 "method": "sock_impl_set_options", 00:18:21.233 "params": { 00:18:21.233 "impl_name": "posix", 00:18:21.233 "recv_buf_size": 2097152, 00:18:21.233 "send_buf_size": 2097152, 00:18:21.233 "enable_recv_pipe": true, 00:18:21.233 "enable_quickack": false, 00:18:21.233 "enable_placement_id": 0, 00:18:21.233 "enable_zerocopy_send_server": true, 00:18:21.233 "enable_zerocopy_send_client": false, 00:18:21.233 "zerocopy_threshold": 0, 00:18:21.233 "tls_version": 0, 00:18:21.233 "enable_ktls": false 00:18:21.233 } 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "method": "sock_impl_set_options", 00:18:21.233 "params": { 00:18:21.233 "impl_name": "ssl", 00:18:21.233 "recv_buf_size": 4096, 00:18:21.233 "send_buf_size": 4096, 00:18:21.233 "enable_recv_pipe": true, 00:18:21.233 "enable_quickack": false, 00:18:21.233 "enable_placement_id": 0, 00:18:21.233 "enable_zerocopy_send_server": true, 00:18:21.233 "enable_zerocopy_send_client": false, 00:18:21.233 "zerocopy_threshold": 0, 00:18:21.233 "tls_version": 0, 00:18:21.233 "enable_ktls": false 00:18:21.233 } 00:18:21.233 } 00:18:21.233 ] 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "subsystem": "vmd", 00:18:21.233 "config": [] 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "subsystem": "accel", 00:18:21.233 "config": [ 00:18:21.233 { 00:18:21.233 "method": "accel_set_options", 00:18:21.233 "params": { 00:18:21.233 "small_cache_size": 128, 00:18:21.233 "large_cache_size": 16, 00:18:21.233 "task_count": 2048, 00:18:21.233 "sequence_count": 2048, 00:18:21.233 "buf_count": 2048 00:18:21.233 } 00:18:21.233 } 00:18:21.233 ] 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "subsystem": "bdev", 00:18:21.233 "config": [ 00:18:21.233 { 00:18:21.233 "method": "bdev_set_options", 00:18:21.233 "params": { 00:18:21.233 "bdev_io_pool_size": 65535, 00:18:21.233 "bdev_io_cache_size": 256, 00:18:21.233 "bdev_auto_examine": true, 00:18:21.233 "iobuf_small_cache_size": 128, 00:18:21.233 "iobuf_large_cache_size": 16 00:18:21.233 } 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "method": "bdev_raid_set_options", 00:18:21.233 "params": { 00:18:21.233 "process_window_size_kb": 1024 00:18:21.233 } 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "method": "bdev_iscsi_set_options", 00:18:21.233 "params": { 00:18:21.233 "timeout_sec": 30 00:18:21.233 } 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "method": "bdev_nvme_set_options", 00:18:21.233 "params": { 00:18:21.233 "action_on_timeout": "none", 00:18:21.233 "timeout_us": 0, 00:18:21.233 "timeout_admin_us": 0, 00:18:21.233 "keep_alive_timeout_ms": 10000, 00:18:21.233 
"arbitration_burst": 0, 00:18:21.233 "low_priority_weight": 0, 00:18:21.233 "medium_priority_weight": 0, 00:18:21.233 "high_priority_weight": 0, 00:18:21.233 "nvme_adminq_poll_period_us": 10000, 00:18:21.233 "nvme_ioq_poll_period_us": 0, 00:18:21.233 "io_queue_requests": 512, 00:18:21.233 "delay_cmd_submit": true, 00:18:21.233 "transport_retry_count": 4, 00:18:21.233 "bdev_retry_count": 3, 00:18:21.233 "transport_ack_timeout": 0, 00:18:21.233 "ctrlr_loss_timeout_sec": 0, 00:18:21.233 "reconnect_delay_sec": 0, 00:18:21.233 "fast_io_fail_timeout_sec": 0, 00:18:21.233 "disable_auto_failback": false, 00:18:21.233 "generate_uuids": false, 00:18:21.233 "transport_tos": 0, 00:18:21.233 "nvme_error_stat": false, 00:18:21.233 "rdma_srq_size": 0, 00:18:21.233 "io_path_stat": false, 00:18:21.233 "allow_accel_sequence": false, 00:18:21.233 "rdma_max_cq_size": 0, 00:18:21.233 "rdma_cm_event_timeout_ms": 0, 00:18:21.233 "dhchap_digests": [ 00:18:21.233 "sha256", 00:18:21.233 "sha384", 00:18:21.233 "sha512" 00:18:21.233 ], 00:18:21.233 "dhchap_dhgroups": [ 00:18:21.233 "null", 00:18:21.233 "ffdhe2048", 00:18:21.233 "ffdhe3072", 00:18:21.233 "ffdhe4096", 00:18:21.233 "ffdhe6144", 00:18:21.233 "ffdhe8192" 00:18:21.233 ] 00:18:21.233 } 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "method": "bdev_nvme_attach_controller", 00:18:21.233 "params": { 00:18:21.233 "name": "TLSTEST", 00:18:21.233 "trtype": "TCP", 00:18:21.233 "adrfam": "IPv4", 00:18:21.233 "traddr": "10.0.0.2", 00:18:21.233 "trsvcid": "4420", 00:18:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.233 "prchk_reftag": false, 00:18:21.233 "prchk_guard": false, 00:18:21.233 "ctrlr_loss_timeout_sec": 0, 00:18:21.233 "reconnect_delay_sec": 0, 00:18:21.233 "fast_io_fail_timeout_sec": 0, 00:18:21.233 "psk": "/tmp/tmp.ApF9KQHZmo", 00:18:21.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.233 "hdgst": false, 00:18:21.233 "ddgst": false 00:18:21.233 } 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "method": "bdev_nvme_set_hotplug", 00:18:21.233 "params": { 00:18:21.233 "period_us": 100000, 00:18:21.233 "enable": false 00:18:21.233 } 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "method": "bdev_wait_for_examine" 00:18:21.233 } 00:18:21.233 ] 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "subsystem": "nbd", 00:18:21.233 "config": [] 00:18:21.233 } 00:18:21.233 ] 00:18:21.233 }' 00:18:21.233 [2024-04-27 00:02:51.419564] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:18:21.233 [2024-04-27 00:02:51.419616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412562 ] 00:18:21.233 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.493 [2024-04-27 00:02:51.470390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.493 [2024-04-27 00:02:51.521985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.493 [2024-04-27 00:02:51.638593] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.494 [2024-04-27 00:02:51.638659] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:22.063 00:02:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.063 00:02:52 -- common/autotest_common.sh@850 -- # return 0 00:18:22.063 00:02:52 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:22.063 Running I/O for 10 seconds... 00:18:34.290 00:18:34.290 Latency(us) 00:18:34.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.290 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:34.290 Verification LBA range: start 0x0 length 0x2000 00:18:34.290 TLSTESTn1 : 10.03 3609.97 14.10 0.00 0.00 35388.65 5980.16 36263.25 00:18:34.290 =================================================================================================================== 00:18:34.290 Total : 3609.97 14.10 0.00 0.00 35388.65 5980.16 36263.25 00:18:34.290 0 00:18:34.290 00:03:02 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:34.290 00:03:02 -- target/tls.sh@214 -- # killprocess 412562 00:18:34.290 00:03:02 -- common/autotest_common.sh@936 -- # '[' -z 412562 ']' 00:18:34.290 00:03:02 -- common/autotest_common.sh@940 -- # kill -0 412562 00:18:34.290 00:03:02 -- common/autotest_common.sh@941 -- # uname 00:18:34.290 00:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:34.290 00:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 412562 00:18:34.290 00:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:34.290 00:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:34.290 00:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 412562' 00:18:34.290 killing process with pid 412562 00:18:34.290 00:03:02 -- common/autotest_common.sh@955 -- # kill 412562 00:18:34.290 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.290 00:18:34.290 Latency(us) 00:18:34.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.290 =================================================================================================================== 00:18:34.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.290 [2024-04-27 00:03:02.408853] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:34.290 00:03:02 -- common/autotest_common.sh@960 -- # wait 412562 00:18:34.290 00:03:02 -- target/tls.sh@215 -- # killprocess 412411 00:18:34.290 00:03:02 -- common/autotest_common.sh@936 -- # '[' -z 412411 ']' 00:18:34.290 
00:03:02 -- common/autotest_common.sh@940 -- # kill -0 412411 00:18:34.290 00:03:02 -- common/autotest_common.sh@941 -- # uname 00:18:34.290 00:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:34.290 00:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 412411 00:18:34.290 00:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:34.290 00:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:34.290 00:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 412411' 00:18:34.290 killing process with pid 412411 00:18:34.290 00:03:02 -- common/autotest_common.sh@955 -- # kill 412411 00:18:34.290 [2024-04-27 00:03:02.577518] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:34.290 00:03:02 -- common/autotest_common.sh@960 -- # wait 412411 00:18:34.290 00:03:02 -- target/tls.sh@218 -- # nvmfappstart 00:18:34.290 00:03:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:34.290 00:03:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:34.290 00:03:02 -- common/autotest_common.sh@10 -- # set +x 00:18:34.290 00:03:02 -- nvmf/common.sh@470 -- # nvmfpid=414891 00:18:34.290 00:03:02 -- nvmf/common.sh@471 -- # waitforlisten 414891 00:18:34.290 00:03:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:34.290 00:03:02 -- common/autotest_common.sh@817 -- # '[' -z 414891 ']' 00:18:34.290 00:03:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.290 00:03:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:34.290 00:03:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.290 00:03:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:34.290 00:03:02 -- common/autotest_common.sh@10 -- # set +x 00:18:34.290 [2024-04-27 00:03:02.775320] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:34.290 [2024-04-27 00:03:02.775373] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.290 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.290 [2024-04-27 00:03:02.844164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.290 [2024-04-27 00:03:02.913168] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.290 [2024-04-27 00:03:02.913208] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.290 [2024-04-27 00:03:02.913215] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.290 [2024-04-27 00:03:02.913222] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.290 [2024-04-27 00:03:02.913227] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
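The freshly started target (pid 414891) runs with '-e 0xFFFF', so all tracepoint groups are enabled and the notices above describe how to collect them. Following those notices directly (assuming the spdk_trace tool from the same build is on PATH):

  # Capture a live snapshot of the nvmf tracepoints from shm id 0, as the notice suggests.
  spdk_trace -s nvmf -i 0
  # Or keep the raw trace buffer for offline analysis/debug after the run.
  cp /dev/shm/nvmf_trace.0 /tmp/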
00:18:34.290 [2024-04-27 00:03:02.913254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.290 00:03:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.290 00:03:02 -- common/autotest_common.sh@850 -- # return 0 00:18:34.290 00:03:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:34.290 00:03:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:34.290 00:03:02 -- common/autotest_common.sh@10 -- # set +x 00:18:34.290 00:03:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.290 00:03:03 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ApF9KQHZmo 00:18:34.290 00:03:03 -- target/tls.sh@49 -- # local key=/tmp/tmp.ApF9KQHZmo 00:18:34.290 00:03:03 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:34.290 [2024-04-27 00:03:03.166572] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.290 00:03:03 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:34.290 00:03:03 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.290 [2024-04-27 00:03:03.455291] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.290 [2024-04-27 00:03:03.455511] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.290 00:03:03 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:34.290 malloc0 00:18:34.290 00:03:03 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.291 00:03:03 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ApF9KQHZmo 00:18:34.291 [2024-04-27 00:03:03.895295] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:34.291 00:03:03 -- target/tls.sh@222 -- # bdevperf_pid=415242 00:18:34.291 00:03:03 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.291 00:03:03 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:34.291 00:03:03 -- target/tls.sh@225 -- # waitforlisten 415242 /var/tmp/bdevperf.sock 00:18:34.291 00:03:03 -- common/autotest_common.sh@817 -- # '[' -z 415242 ']' 00:18:34.291 00:03:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.291 00:03:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:34.291 00:03:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
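For this pass the target is configured entirely at runtime over its default RPC socket (the target/tls.sh@51-58 xtrace lines above). Collected in order and stripped of the trace noise, the sequence is:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o        # -o: disable C2H success ("c2h_success": false in the dumps)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0  # 32 MiB malloc bdev, 4 KiB blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ApF9KQHZmo
  # Note: the path-based --psk form triggers the "PSK path ... deprecated" warning above.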
00:18:34.291 00:03:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:34.291 00:03:03 -- common/autotest_common.sh@10 -- # set +x 00:18:34.291 [2024-04-27 00:03:03.957015] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:34.291 [2024-04-27 00:03:03.957065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415242 ] 00:18:34.291 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.291 [2024-04-27 00:03:04.015350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.291 [2024-04-27 00:03:04.079275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.550 00:03:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.550 00:03:04 -- common/autotest_common.sh@850 -- # return 0 00:18:34.550 00:03:04 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ApF9KQHZmo 00:18:34.811 00:03:04 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:34.811 [2024-04-27 00:03:05.010070] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.071 nvme0n1 00:18:35.071 00:03:05 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.071 Running I/O for 1 seconds... 
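On the initiator side the same PSK is loaded into the bdevperf process through its keyring and referenced by name when attaching the controller. These are the two rpc.py calls traced above, repeated without the xtrace noise ('-s /var/tmp/bdevperf.sock' targets bdevperf's RPC socket rather than the target's):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ApF9KQHZmo
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
       -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # The resulting bdev (nvme0n1) is then exercised with bdevperf.py perform_tests.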
00:18:36.012 00:18:36.012 Latency(us) 00:18:36.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.012 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:36.012 Verification LBA range: start 0x0 length 0x2000 00:18:36.012 nvme0n1 : 1.02 3339.27 13.04 0.00 0.00 37987.12 6198.61 46312.11 00:18:36.012 =================================================================================================================== 00:18:36.012 Total : 3339.27 13.04 0.00 0.00 37987.12 6198.61 46312.11 00:18:36.012 0 00:18:36.012 00:03:06 -- target/tls.sh@234 -- # killprocess 415242 00:18:36.012 00:03:06 -- common/autotest_common.sh@936 -- # '[' -z 415242 ']' 00:18:36.012 00:03:06 -- common/autotest_common.sh@940 -- # kill -0 415242 00:18:36.012 00:03:06 -- common/autotest_common.sh@941 -- # uname 00:18:36.012 00:03:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.012 00:03:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 415242 00:18:36.273 00:03:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:36.273 00:03:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:36.273 00:03:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 415242' 00:18:36.273 killing process with pid 415242 00:18:36.273 00:03:06 -- common/autotest_common.sh@955 -- # kill 415242 00:18:36.273 Received shutdown signal, test time was about 1.000000 seconds 00:18:36.273 00:18:36.273 Latency(us) 00:18:36.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.273 =================================================================================================================== 00:18:36.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.273 00:03:06 -- common/autotest_common.sh@960 -- # wait 415242 00:18:36.273 00:03:06 -- target/tls.sh@235 -- # killprocess 414891 00:18:36.273 00:03:06 -- common/autotest_common.sh@936 -- # '[' -z 414891 ']' 00:18:36.273 00:03:06 -- common/autotest_common.sh@940 -- # kill -0 414891 00:18:36.273 00:03:06 -- common/autotest_common.sh@941 -- # uname 00:18:36.273 00:03:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.273 00:03:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 414891 00:18:36.273 00:03:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:36.273 00:03:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:36.273 00:03:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 414891' 00:18:36.273 killing process with pid 414891 00:18:36.273 00:03:06 -- common/autotest_common.sh@955 -- # kill 414891 00:18:36.273 [2024-04-27 00:03:06.458801] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:36.273 00:03:06 -- common/autotest_common.sh@960 -- # wait 414891 00:18:36.534 00:03:06 -- target/tls.sh@238 -- # nvmfappstart 00:18:36.534 00:03:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:36.534 00:03:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:36.534 00:03:06 -- common/autotest_common.sh@10 -- # set +x 00:18:36.534 00:03:06 -- nvmf/common.sh@470 -- # nvmfpid=415608 00:18:36.534 00:03:06 -- nvmf/common.sh@471 -- # waitforlisten 415608 00:18:36.534 00:03:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:36.534 00:03:06 -- 
common/autotest_common.sh@817 -- # '[' -z 415608 ']' 00:18:36.534 00:03:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.534 00:03:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:36.534 00:03:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.534 00:03:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:36.534 00:03:06 -- common/autotest_common.sh@10 -- # set +x 00:18:36.534 [2024-04-27 00:03:06.654768] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:36.534 [2024-04-27 00:03:06.654857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.534 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.534 [2024-04-27 00:03:06.724918] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.795 [2024-04-27 00:03:06.787921] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.795 [2024-04-27 00:03:06.787961] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.795 [2024-04-27 00:03:06.787969] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.795 [2024-04-27 00:03:06.787975] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.795 [2024-04-27 00:03:06.787980] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:36.795 [2024-04-27 00:03:06.787999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.366 00:03:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:37.366 00:03:07 -- common/autotest_common.sh@850 -- # return 0 00:18:37.366 00:03:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:37.366 00:03:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.366 00:03:07 -- common/autotest_common.sh@10 -- # set +x 00:18:37.366 00:03:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.366 00:03:07 -- target/tls.sh@239 -- # rpc_cmd 00:18:37.366 00:03:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.366 00:03:07 -- common/autotest_common.sh@10 -- # set +x 00:18:37.366 [2024-04-27 00:03:07.462916] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.366 malloc0 00:18:37.366 [2024-04-27 00:03:07.489645] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.366 [2024-04-27 00:03:07.489860] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.366 00:03:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.366 00:03:07 -- target/tls.sh@252 -- # bdevperf_pid=415953 00:18:37.366 00:03:07 -- target/tls.sh@254 -- # waitforlisten 415953 /var/tmp/bdevperf.sock 00:18:37.366 00:03:07 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:37.366 00:03:07 -- common/autotest_common.sh@817 -- # '[' -z 415953 ']' 00:18:37.366 00:03:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.366 00:03:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:37.366 00:03:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.366 00:03:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.366 00:03:07 -- common/autotest_common.sh@10 -- # set +x 00:18:37.366 [2024-04-27 00:03:07.567531] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:18:37.366 [2024-04-27 00:03:07.567581] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415953 ] 00:18:37.627 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.627 [2024-04-27 00:03:07.626211] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.627 [2024-04-27 00:03:07.689893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.198 00:03:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:38.198 00:03:08 -- common/autotest_common.sh@850 -- # return 0 00:18:38.198 00:03:08 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ApF9KQHZmo 00:18:38.458 00:03:08 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:38.458 [2024-04-27 00:03:08.604422] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.719 nvme0n1 00:18:38.719 00:03:08 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.719 Running I/O for 1 seconds... 00:18:39.660 00:18:39.660 Latency(us) 00:18:39.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.660 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:39.660 Verification LBA range: start 0x0 length 0x2000 00:18:39.660 nvme0n1 : 1.05 3266.81 12.76 0.00 0.00 38238.84 6144.00 50899.63 00:18:39.660 =================================================================================================================== 00:18:39.660 Total : 3266.81 12.76 0.00 0.00 38238.84 6144.00 50899.63 00:18:39.660 0 00:18:39.660 00:03:09 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:39.660 00:03:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.660 00:03:09 -- common/autotest_common.sh@10 -- # set +x 00:18:39.922 00:03:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.922 00:03:09 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:39.922 "subsystems": [ 00:18:39.922 { 00:18:39.922 "subsystem": "keyring", 00:18:39.922 "config": [ 00:18:39.922 { 00:18:39.922 "method": "keyring_file_add_key", 00:18:39.922 "params": { 00:18:39.922 "name": "key0", 00:18:39.922 "path": "/tmp/tmp.ApF9KQHZmo" 00:18:39.922 } 00:18:39.922 } 00:18:39.922 ] 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "subsystem": "iobuf", 00:18:39.922 "config": [ 00:18:39.922 { 00:18:39.922 "method": "iobuf_set_options", 00:18:39.922 "params": { 00:18:39.922 "small_pool_count": 8192, 00:18:39.922 "large_pool_count": 1024, 00:18:39.922 "small_bufsize": 8192, 00:18:39.922 "large_bufsize": 135168 00:18:39.922 } 00:18:39.922 } 00:18:39.922 ] 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "subsystem": "sock", 00:18:39.922 "config": [ 00:18:39.922 { 00:18:39.922 "method": "sock_impl_set_options", 00:18:39.922 "params": { 00:18:39.922 "impl_name": "posix", 00:18:39.922 "recv_buf_size": 2097152, 00:18:39.922 "send_buf_size": 2097152, 00:18:39.922 "enable_recv_pipe": true, 00:18:39.922 "enable_quickack": false, 00:18:39.922 "enable_placement_id": 0, 00:18:39.922 
"enable_zerocopy_send_server": true, 00:18:39.922 "enable_zerocopy_send_client": false, 00:18:39.922 "zerocopy_threshold": 0, 00:18:39.922 "tls_version": 0, 00:18:39.922 "enable_ktls": false 00:18:39.922 } 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "method": "sock_impl_set_options", 00:18:39.922 "params": { 00:18:39.922 "impl_name": "ssl", 00:18:39.922 "recv_buf_size": 4096, 00:18:39.922 "send_buf_size": 4096, 00:18:39.922 "enable_recv_pipe": true, 00:18:39.922 "enable_quickack": false, 00:18:39.922 "enable_placement_id": 0, 00:18:39.922 "enable_zerocopy_send_server": true, 00:18:39.922 "enable_zerocopy_send_client": false, 00:18:39.922 "zerocopy_threshold": 0, 00:18:39.922 "tls_version": 0, 00:18:39.922 "enable_ktls": false 00:18:39.922 } 00:18:39.922 } 00:18:39.922 ] 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "subsystem": "vmd", 00:18:39.922 "config": [] 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "subsystem": "accel", 00:18:39.922 "config": [ 00:18:39.922 { 00:18:39.922 "method": "accel_set_options", 00:18:39.922 "params": { 00:18:39.922 "small_cache_size": 128, 00:18:39.922 "large_cache_size": 16, 00:18:39.922 "task_count": 2048, 00:18:39.922 "sequence_count": 2048, 00:18:39.922 "buf_count": 2048 00:18:39.922 } 00:18:39.922 } 00:18:39.922 ] 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "subsystem": "bdev", 00:18:39.922 "config": [ 00:18:39.922 { 00:18:39.922 "method": "bdev_set_options", 00:18:39.922 "params": { 00:18:39.922 "bdev_io_pool_size": 65535, 00:18:39.922 "bdev_io_cache_size": 256, 00:18:39.922 "bdev_auto_examine": true, 00:18:39.922 "iobuf_small_cache_size": 128, 00:18:39.922 "iobuf_large_cache_size": 16 00:18:39.922 } 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "method": "bdev_raid_set_options", 00:18:39.922 "params": { 00:18:39.922 "process_window_size_kb": 1024 00:18:39.922 } 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "method": "bdev_iscsi_set_options", 00:18:39.922 "params": { 00:18:39.922 "timeout_sec": 30 00:18:39.922 } 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "method": "bdev_nvme_set_options", 00:18:39.922 "params": { 00:18:39.922 "action_on_timeout": "none", 00:18:39.922 "timeout_us": 0, 00:18:39.922 "timeout_admin_us": 0, 00:18:39.922 "keep_alive_timeout_ms": 10000, 00:18:39.922 "arbitration_burst": 0, 00:18:39.922 "low_priority_weight": 0, 00:18:39.922 "medium_priority_weight": 0, 00:18:39.922 "high_priority_weight": 0, 00:18:39.922 "nvme_adminq_poll_period_us": 10000, 00:18:39.922 "nvme_ioq_poll_period_us": 0, 00:18:39.922 "io_queue_requests": 0, 00:18:39.922 "delay_cmd_submit": true, 00:18:39.922 "transport_retry_count": 4, 00:18:39.922 "bdev_retry_count": 3, 00:18:39.922 "transport_ack_timeout": 0, 00:18:39.922 "ctrlr_loss_timeout_sec": 0, 00:18:39.922 "reconnect_delay_sec": 0, 00:18:39.922 "fast_io_fail_timeout_sec": 0, 00:18:39.922 "disable_auto_failback": false, 00:18:39.922 "generate_uuids": false, 00:18:39.922 "transport_tos": 0, 00:18:39.922 "nvme_error_stat": false, 00:18:39.922 "rdma_srq_size": 0, 00:18:39.922 "io_path_stat": false, 00:18:39.922 "allow_accel_sequence": false, 00:18:39.922 "rdma_max_cq_size": 0, 00:18:39.922 "rdma_cm_event_timeout_ms": 0, 00:18:39.922 "dhchap_digests": [ 00:18:39.922 "sha256", 00:18:39.922 "sha384", 00:18:39.922 "sha512" 00:18:39.922 ], 00:18:39.922 "dhchap_dhgroups": [ 00:18:39.922 "null", 00:18:39.922 "ffdhe2048", 00:18:39.922 "ffdhe3072", 00:18:39.922 "ffdhe4096", 00:18:39.922 "ffdhe6144", 00:18:39.922 "ffdhe8192" 00:18:39.922 ] 00:18:39.922 } 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "method": 
"bdev_nvme_set_hotplug", 00:18:39.922 "params": { 00:18:39.922 "period_us": 100000, 00:18:39.922 "enable": false 00:18:39.922 } 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "method": "bdev_malloc_create", 00:18:39.922 "params": { 00:18:39.922 "name": "malloc0", 00:18:39.922 "num_blocks": 8192, 00:18:39.922 "block_size": 4096, 00:18:39.922 "physical_block_size": 4096, 00:18:39.922 "uuid": "7de333c9-a77a-426b-b579-4e4608faaeb7", 00:18:39.922 "optimal_io_boundary": 0 00:18:39.922 } 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "method": "bdev_wait_for_examine" 00:18:39.922 } 00:18:39.922 ] 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "subsystem": "nbd", 00:18:39.922 "config": [] 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "subsystem": "scheduler", 00:18:39.922 "config": [ 00:18:39.922 { 00:18:39.922 "method": "framework_set_scheduler", 00:18:39.922 "params": { 00:18:39.922 "name": "static" 00:18:39.922 } 00:18:39.922 } 00:18:39.922 ] 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "subsystem": "nvmf", 00:18:39.922 "config": [ 00:18:39.922 { 00:18:39.922 "method": "nvmf_set_config", 00:18:39.922 "params": { 00:18:39.922 "discovery_filter": "match_any", 00:18:39.922 "admin_cmd_passthru": { 00:18:39.922 "identify_ctrlr": false 00:18:39.922 } 00:18:39.922 } 00:18:39.922 }, 00:18:39.922 { 00:18:39.922 "method": "nvmf_set_max_subsystems", 00:18:39.923 "params": { 00:18:39.923 "max_subsystems": 1024 00:18:39.923 } 00:18:39.923 }, 00:18:39.923 { 00:18:39.923 "method": "nvmf_set_crdt", 00:18:39.923 "params": { 00:18:39.923 "crdt1": 0, 00:18:39.923 "crdt2": 0, 00:18:39.923 "crdt3": 0 00:18:39.923 } 00:18:39.923 }, 00:18:39.923 { 00:18:39.923 "method": "nvmf_create_transport", 00:18:39.923 "params": { 00:18:39.923 "trtype": "TCP", 00:18:39.923 "max_queue_depth": 128, 00:18:39.923 "max_io_qpairs_per_ctrlr": 127, 00:18:39.923 "in_capsule_data_size": 4096, 00:18:39.923 "max_io_size": 131072, 00:18:39.923 "io_unit_size": 131072, 00:18:39.923 "max_aq_depth": 128, 00:18:39.923 "num_shared_buffers": 511, 00:18:39.923 "buf_cache_size": 4294967295, 00:18:39.923 "dif_insert_or_strip": false, 00:18:39.923 "zcopy": false, 00:18:39.923 "c2h_success": false, 00:18:39.923 "sock_priority": 0, 00:18:39.923 "abort_timeout_sec": 1, 00:18:39.923 "ack_timeout": 0, 00:18:39.923 "data_wr_pool_size": 0 00:18:39.923 } 00:18:39.923 }, 00:18:39.923 { 00:18:39.923 "method": "nvmf_create_subsystem", 00:18:39.923 "params": { 00:18:39.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.923 "allow_any_host": false, 00:18:39.923 "serial_number": "00000000000000000000", 00:18:39.923 "model_number": "SPDK bdev Controller", 00:18:39.923 "max_namespaces": 32, 00:18:39.923 "min_cntlid": 1, 00:18:39.923 "max_cntlid": 65519, 00:18:39.923 "ana_reporting": false 00:18:39.923 } 00:18:39.923 }, 00:18:39.923 { 00:18:39.923 "method": "nvmf_subsystem_add_host", 00:18:39.923 "params": { 00:18:39.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.923 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.923 "psk": "key0" 00:18:39.923 } 00:18:39.923 }, 00:18:39.923 { 00:18:39.923 "method": "nvmf_subsystem_add_ns", 00:18:39.923 "params": { 00:18:39.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.923 "namespace": { 00:18:39.923 "nsid": 1, 00:18:39.923 "bdev_name": "malloc0", 00:18:39.923 "nguid": "7DE333C9A77A426BB5794E4608FAAEB7", 00:18:39.923 "uuid": "7de333c9-a77a-426b-b579-4e4608faaeb7", 00:18:39.923 "no_auto_visible": false 00:18:39.923 } 00:18:39.923 } 00:18:39.923 }, 00:18:39.923 { 00:18:39.923 "method": "nvmf_subsystem_add_listener", 00:18:39.923 "params": { 
00:18:39.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.923 "listen_address": { 00:18:39.923 "trtype": "TCP", 00:18:39.923 "adrfam": "IPv4", 00:18:39.923 "traddr": "10.0.0.2", 00:18:39.923 "trsvcid": "4420" 00:18:39.923 }, 00:18:39.923 "secure_channel": true 00:18:39.923 } 00:18:39.923 } 00:18:39.923 ] 00:18:39.923 } 00:18:39.923 ] 00:18:39.923 }' 00:18:39.923 00:03:09 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:40.183 00:03:10 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:40.183 "subsystems": [ 00:18:40.183 { 00:18:40.183 "subsystem": "keyring", 00:18:40.183 "config": [ 00:18:40.183 { 00:18:40.183 "method": "keyring_file_add_key", 00:18:40.183 "params": { 00:18:40.183 "name": "key0", 00:18:40.183 "path": "/tmp/tmp.ApF9KQHZmo" 00:18:40.183 } 00:18:40.183 } 00:18:40.183 ] 00:18:40.183 }, 00:18:40.183 { 00:18:40.183 "subsystem": "iobuf", 00:18:40.183 "config": [ 00:18:40.183 { 00:18:40.183 "method": "iobuf_set_options", 00:18:40.183 "params": { 00:18:40.183 "small_pool_count": 8192, 00:18:40.183 "large_pool_count": 1024, 00:18:40.183 "small_bufsize": 8192, 00:18:40.183 "large_bufsize": 135168 00:18:40.183 } 00:18:40.183 } 00:18:40.183 ] 00:18:40.183 }, 00:18:40.183 { 00:18:40.183 "subsystem": "sock", 00:18:40.183 "config": [ 00:18:40.183 { 00:18:40.183 "method": "sock_impl_set_options", 00:18:40.183 "params": { 00:18:40.183 "impl_name": "posix", 00:18:40.183 "recv_buf_size": 2097152, 00:18:40.184 "send_buf_size": 2097152, 00:18:40.184 "enable_recv_pipe": true, 00:18:40.184 "enable_quickack": false, 00:18:40.184 "enable_placement_id": 0, 00:18:40.184 "enable_zerocopy_send_server": true, 00:18:40.184 "enable_zerocopy_send_client": false, 00:18:40.184 "zerocopy_threshold": 0, 00:18:40.184 "tls_version": 0, 00:18:40.184 "enable_ktls": false 00:18:40.184 } 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "method": "sock_impl_set_options", 00:18:40.184 "params": { 00:18:40.184 "impl_name": "ssl", 00:18:40.184 "recv_buf_size": 4096, 00:18:40.184 "send_buf_size": 4096, 00:18:40.184 "enable_recv_pipe": true, 00:18:40.184 "enable_quickack": false, 00:18:40.184 "enable_placement_id": 0, 00:18:40.184 "enable_zerocopy_send_server": true, 00:18:40.184 "enable_zerocopy_send_client": false, 00:18:40.184 "zerocopy_threshold": 0, 00:18:40.184 "tls_version": 0, 00:18:40.184 "enable_ktls": false 00:18:40.184 } 00:18:40.184 } 00:18:40.184 ] 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "subsystem": "vmd", 00:18:40.184 "config": [] 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "subsystem": "accel", 00:18:40.184 "config": [ 00:18:40.184 { 00:18:40.184 "method": "accel_set_options", 00:18:40.184 "params": { 00:18:40.184 "small_cache_size": 128, 00:18:40.184 "large_cache_size": 16, 00:18:40.184 "task_count": 2048, 00:18:40.184 "sequence_count": 2048, 00:18:40.184 "buf_count": 2048 00:18:40.184 } 00:18:40.184 } 00:18:40.184 ] 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "subsystem": "bdev", 00:18:40.184 "config": [ 00:18:40.184 { 00:18:40.184 "method": "bdev_set_options", 00:18:40.184 "params": { 00:18:40.184 "bdev_io_pool_size": 65535, 00:18:40.184 "bdev_io_cache_size": 256, 00:18:40.184 "bdev_auto_examine": true, 00:18:40.184 "iobuf_small_cache_size": 128, 00:18:40.184 "iobuf_large_cache_size": 16 00:18:40.184 } 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "method": "bdev_raid_set_options", 00:18:40.184 "params": { 00:18:40.184 "process_window_size_kb": 1024 00:18:40.184 } 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "method": 
"bdev_iscsi_set_options", 00:18:40.184 "params": { 00:18:40.184 "timeout_sec": 30 00:18:40.184 } 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "method": "bdev_nvme_set_options", 00:18:40.184 "params": { 00:18:40.184 "action_on_timeout": "none", 00:18:40.184 "timeout_us": 0, 00:18:40.184 "timeout_admin_us": 0, 00:18:40.184 "keep_alive_timeout_ms": 10000, 00:18:40.184 "arbitration_burst": 0, 00:18:40.184 "low_priority_weight": 0, 00:18:40.184 "medium_priority_weight": 0, 00:18:40.184 "high_priority_weight": 0, 00:18:40.184 "nvme_adminq_poll_period_us": 10000, 00:18:40.184 "nvme_ioq_poll_period_us": 0, 00:18:40.184 "io_queue_requests": 512, 00:18:40.184 "delay_cmd_submit": true, 00:18:40.184 "transport_retry_count": 4, 00:18:40.184 "bdev_retry_count": 3, 00:18:40.184 "transport_ack_timeout": 0, 00:18:40.184 "ctrlr_loss_timeout_sec": 0, 00:18:40.184 "reconnect_delay_sec": 0, 00:18:40.184 "fast_io_fail_timeout_sec": 0, 00:18:40.184 "disable_auto_failback": false, 00:18:40.184 "generate_uuids": false, 00:18:40.184 "transport_tos": 0, 00:18:40.184 "nvme_error_stat": false, 00:18:40.184 "rdma_srq_size": 0, 00:18:40.184 "io_path_stat": false, 00:18:40.184 "allow_accel_sequence": false, 00:18:40.184 "rdma_max_cq_size": 0, 00:18:40.184 "rdma_cm_event_timeout_ms": 0, 00:18:40.184 "dhchap_digests": [ 00:18:40.184 "sha256", 00:18:40.184 "sha384", 00:18:40.184 "sha512" 00:18:40.184 ], 00:18:40.184 "dhchap_dhgroups": [ 00:18:40.184 "null", 00:18:40.184 "ffdhe2048", 00:18:40.184 "ffdhe3072", 00:18:40.184 "ffdhe4096", 00:18:40.184 "ffdhe6144", 00:18:40.184 "ffdhe8192" 00:18:40.184 ] 00:18:40.184 } 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "method": "bdev_nvme_attach_controller", 00:18:40.184 "params": { 00:18:40.184 "name": "nvme0", 00:18:40.184 "trtype": "TCP", 00:18:40.184 "adrfam": "IPv4", 00:18:40.184 "traddr": "10.0.0.2", 00:18:40.184 "trsvcid": "4420", 00:18:40.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.184 "prchk_reftag": false, 00:18:40.184 "prchk_guard": false, 00:18:40.184 "ctrlr_loss_timeout_sec": 0, 00:18:40.184 "reconnect_delay_sec": 0, 00:18:40.184 "fast_io_fail_timeout_sec": 0, 00:18:40.184 "psk": "key0", 00:18:40.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.184 "hdgst": false, 00:18:40.184 "ddgst": false 00:18:40.184 } 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "method": "bdev_nvme_set_hotplug", 00:18:40.184 "params": { 00:18:40.184 "period_us": 100000, 00:18:40.184 "enable": false 00:18:40.184 } 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "method": "bdev_enable_histogram", 00:18:40.184 "params": { 00:18:40.184 "name": "nvme0n1", 00:18:40.184 "enable": true 00:18:40.184 } 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "method": "bdev_wait_for_examine" 00:18:40.184 } 00:18:40.184 ] 00:18:40.184 }, 00:18:40.184 { 00:18:40.184 "subsystem": "nbd", 00:18:40.184 "config": [] 00:18:40.184 } 00:18:40.184 ] 00:18:40.184 }' 00:18:40.184 00:03:10 -- target/tls.sh@266 -- # killprocess 415953 00:18:40.184 00:03:10 -- common/autotest_common.sh@936 -- # '[' -z 415953 ']' 00:18:40.184 00:03:10 -- common/autotest_common.sh@940 -- # kill -0 415953 00:18:40.184 00:03:10 -- common/autotest_common.sh@941 -- # uname 00:18:40.184 00:03:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.184 00:03:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 415953 00:18:40.184 00:03:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:40.184 00:03:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:40.184 00:03:10 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 415953' 00:18:40.184 killing process with pid 415953 00:18:40.184 00:03:10 -- common/autotest_common.sh@955 -- # kill 415953 00:18:40.184 Received shutdown signal, test time was about 1.000000 seconds 00:18:40.184 00:18:40.184 Latency(us) 00:18:40.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.184 =================================================================================================================== 00:18:40.184 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.184 00:03:10 -- common/autotest_common.sh@960 -- # wait 415953 00:18:40.184 00:03:10 -- target/tls.sh@267 -- # killprocess 415608 00:18:40.184 00:03:10 -- common/autotest_common.sh@936 -- # '[' -z 415608 ']' 00:18:40.184 00:03:10 -- common/autotest_common.sh@940 -- # kill -0 415608 00:18:40.184 00:03:10 -- common/autotest_common.sh@941 -- # uname 00:18:40.184 00:03:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.184 00:03:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 415608 00:18:40.444 00:03:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:40.444 00:03:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:40.444 00:03:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 415608' 00:18:40.444 killing process with pid 415608 00:18:40.444 00:03:10 -- common/autotest_common.sh@955 -- # kill 415608 00:18:40.444 00:03:10 -- common/autotest_common.sh@960 -- # wait 415608 00:18:40.444 00:03:10 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:40.444 00:03:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:40.444 00:03:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:40.444 00:03:10 -- target/tls.sh@269 -- # echo '{ 00:18:40.444 "subsystems": [ 00:18:40.444 { 00:18:40.444 "subsystem": "keyring", 00:18:40.444 "config": [ 00:18:40.444 { 00:18:40.444 "method": "keyring_file_add_key", 00:18:40.444 "params": { 00:18:40.444 "name": "key0", 00:18:40.444 "path": "/tmp/tmp.ApF9KQHZmo" 00:18:40.444 } 00:18:40.444 } 00:18:40.444 ] 00:18:40.444 }, 00:18:40.444 { 00:18:40.444 "subsystem": "iobuf", 00:18:40.444 "config": [ 00:18:40.444 { 00:18:40.444 "method": "iobuf_set_options", 00:18:40.444 "params": { 00:18:40.444 "small_pool_count": 8192, 00:18:40.444 "large_pool_count": 1024, 00:18:40.444 "small_bufsize": 8192, 00:18:40.444 "large_bufsize": 135168 00:18:40.444 } 00:18:40.444 } 00:18:40.444 ] 00:18:40.444 }, 00:18:40.444 { 00:18:40.444 "subsystem": "sock", 00:18:40.444 "config": [ 00:18:40.444 { 00:18:40.444 "method": "sock_impl_set_options", 00:18:40.444 "params": { 00:18:40.444 "impl_name": "posix", 00:18:40.444 "recv_buf_size": 2097152, 00:18:40.444 "send_buf_size": 2097152, 00:18:40.444 "enable_recv_pipe": true, 00:18:40.444 "enable_quickack": false, 00:18:40.444 "enable_placement_id": 0, 00:18:40.444 "enable_zerocopy_send_server": true, 00:18:40.444 "enable_zerocopy_send_client": false, 00:18:40.444 "zerocopy_threshold": 0, 00:18:40.444 "tls_version": 0, 00:18:40.444 "enable_ktls": false 00:18:40.444 } 00:18:40.444 }, 00:18:40.444 { 00:18:40.444 "method": "sock_impl_set_options", 00:18:40.444 "params": { 00:18:40.444 "impl_name": "ssl", 00:18:40.444 "recv_buf_size": 4096, 00:18:40.444 "send_buf_size": 4096, 00:18:40.444 "enable_recv_pipe": true, 00:18:40.444 "enable_quickack": false, 00:18:40.444 "enable_placement_id": 0, 00:18:40.444 "enable_zerocopy_send_server": true, 00:18:40.444 
"enable_zerocopy_send_client": false, 00:18:40.444 "zerocopy_threshold": 0, 00:18:40.444 "tls_version": 0, 00:18:40.444 "enable_ktls": false 00:18:40.444 } 00:18:40.444 } 00:18:40.444 ] 00:18:40.444 }, 00:18:40.444 { 00:18:40.444 "subsystem": "vmd", 00:18:40.444 "config": [] 00:18:40.444 }, 00:18:40.444 { 00:18:40.444 "subsystem": "accel", 00:18:40.444 "config": [ 00:18:40.445 { 00:18:40.445 "method": "accel_set_options", 00:18:40.445 "params": { 00:18:40.445 "small_cache_size": 128, 00:18:40.445 "large_cache_size": 16, 00:18:40.445 "task_count": 2048, 00:18:40.445 "sequence_count": 2048, 00:18:40.445 "buf_count": 2048 00:18:40.445 } 00:18:40.445 } 00:18:40.445 ] 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "subsystem": "bdev", 00:18:40.445 "config": [ 00:18:40.445 { 00:18:40.445 "method": "bdev_set_options", 00:18:40.445 "params": { 00:18:40.445 "bdev_io_pool_size": 65535, 00:18:40.445 "bdev_io_cache_size": 256, 00:18:40.445 "bdev_auto_examine": true, 00:18:40.445 "iobuf_small_cache_size": 128, 00:18:40.445 "iobuf_large_cache_size": 16 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "bdev_raid_set_options", 00:18:40.445 "params": { 00:18:40.445 "process_window_size_kb": 1024 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "bdev_iscsi_set_options", 00:18:40.445 "params": { 00:18:40.445 "timeout_sec": 30 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "bdev_nvme_set_options", 00:18:40.445 "params": { 00:18:40.445 "action_on_timeout": "none", 00:18:40.445 "timeout_us": 0, 00:18:40.445 "timeout_admin_us": 0, 00:18:40.445 "keep_alive_timeout_ms": 10000, 00:18:40.445 "arbitration_burst": 0, 00:18:40.445 "low_priority_weight": 0, 00:18:40.445 "medium_priority_weight": 0, 00:18:40.445 "high_priority_weight": 0, 00:18:40.445 "nvme_adminq_poll_period_us": 10000, 00:18:40.445 "nvme_ioq_poll_period_us": 0, 00:18:40.445 "io_queue_requests": 0, 00:18:40.445 "delay_cmd_submit": true, 00:18:40.445 "transport_retry_count": 4, 00:18:40.445 "bdev_retry_count": 3, 00:18:40.445 "transport_ack_timeout": 0, 00:18:40.445 "ctrlr_loss_timeout_sec": 0, 00:18:40.445 "reconnect_delay_sec": 0, 00:18:40.445 "fast_io_fail_timeout_sec": 0, 00:18:40.445 "disable_auto_failback": false, 00:18:40.445 "generate_uuids": false, 00:18:40.445 "transport_tos": 0, 00:18:40.445 "nvme_error_stat": false, 00:18:40.445 "rdma_srq_size": 0, 00:18:40.445 "io_path_stat": false, 00:18:40.445 "allow_accel_sequence": false, 00:18:40.445 "rdma_max_cq_size": 0, 00:18:40.445 "rdma_cm_event_timeout_ms": 0, 00:18:40.445 "dhchap_digests": [ 00:18:40.445 "sha256", 00:18:40.445 "sha384", 00:18:40.445 "sha512" 00:18:40.445 ], 00:18:40.445 "dhchap_dhgroups": [ 00:18:40.445 "null", 00:18:40.445 "ffdhe2048", 00:18:40.445 "ffdhe3072", 00:18:40.445 "ffdhe4096", 00:18:40.445 "ffdhe6144", 00:18:40.445 "ffdhe8192" 00:18:40.445 ] 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "bdev_nvme_set_hotplug", 00:18:40.445 "params": { 00:18:40.445 "period_us": 100000, 00:18:40.445 "enable": false 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "bdev_malloc_create", 00:18:40.445 "params": { 00:18:40.445 "name": "malloc0", 00:18:40.445 "num_blocks": 8192, 00:18:40.445 "block_size": 4096, 00:18:40.445 "physical_block_size": 4096, 00:18:40.445 "uuid": "7de333c9-a77a-426b-b579-4e4608faaeb7", 00:18:40.445 "optimal_io_boundary": 0 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "bdev_wait_for_examine" 00:18:40.445 } 00:18:40.445 ] 00:18:40.445 }, 
00:18:40.445 { 00:18:40.445 "subsystem": "nbd", 00:18:40.445 "config": [] 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "subsystem": "scheduler", 00:18:40.445 "config": [ 00:18:40.445 { 00:18:40.445 "method": "framework_set_scheduler", 00:18:40.445 "params": { 00:18:40.445 "name": "static" 00:18:40.445 } 00:18:40.445 } 00:18:40.445 ] 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "subsystem": "nvmf", 00:18:40.445 "config": [ 00:18:40.445 { 00:18:40.445 "method": "nvmf_set_config", 00:18:40.445 "params": { 00:18:40.445 "discovery_filter": "match_any", 00:18:40.445 "admin_cmd_passthru": { 00:18:40.445 "identify_ctrlr": false 00:18:40.445 } 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "nvmf_set_max_subsystems", 00:18:40.445 "params": { 00:18:40.445 "max_subsystems": 1024 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "nvmf_set_crdt", 00:18:40.445 "params": { 00:18:40.445 "crdt1": 0, 00:18:40.445 "crdt2": 0, 00:18:40.445 "crdt3": 0 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "nvmf_create_transport", 00:18:40.445 "params": { 00:18:40.445 "trtype": "TCP", 00:18:40.445 "max_queue_depth": 128, 00:18:40.445 "max_io_qpairs_per_ctrlr": 127, 00:18:40.445 "in_capsule_data_size": 4096, 00:18:40.445 "max_io_size": 131072, 00:18:40.445 "io_unit_size": 131072, 00:18:40.445 "max_aq_depth": 128, 00:18:40.445 "num_shared_buffers": 511, 00:18:40.445 "buf_cache_size": 4294967295, 00:18:40.445 "dif_insert_or_strip": false, 00:18:40.445 "zcopy": false, 00:18:40.445 "c2h_success": false, 00:18:40.445 "sock_priority": 0, 00:18:40.445 "abort_timeout_sec": 1, 00:18:40.445 "ack_timeout": 0, 00:18:40.445 "data_wr_pool_size": 0 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "nvmf_create_subsystem", 00:18:40.445 "params": { 00:18:40.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.445 "allow_any_host": false, 00:18:40.445 "serial_number": "00000000000000000000", 00:18:40.445 "model_number": "SPDK bdev 00:03:10 -- common/autotest_common.sh@10 -- # set +x 00:18:40.445 Controller", 00:18:40.445 "max_namespaces": 32, 00:18:40.445 "min_cntlid": 1, 00:18:40.445 "max_cntlid": 65519, 00:18:40.445 "ana_reporting": false 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "nvmf_subsystem_add_host", 00:18:40.445 "params": { 00:18:40.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.445 "host": "nqn.2016-06.io.spdk:host1", 00:18:40.445 "psk": "key0" 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "nvmf_subsystem_add_ns", 00:18:40.445 "params": { 00:18:40.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.445 "namespace": { 00:18:40.445 "nsid": 1, 00:18:40.445 "bdev_name": "malloc0", 00:18:40.445 "nguid": "7DE333C9A77A426BB5794E4608FAAEB7", 00:18:40.445 "uuid": "7de333c9-a77a-426b-b579-4e4608faaeb7", 00:18:40.445 "no_auto_visible": false 00:18:40.445 } 00:18:40.445 } 00:18:40.445 }, 00:18:40.445 { 00:18:40.445 "method": "nvmf_subsystem_add_listener", 00:18:40.445 "params": { 00:18:40.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.445 "listen_address": { 00:18:40.445 "trtype": "TCP", 00:18:40.445 "adrfam": "IPv4", 00:18:40.445 "traddr": "10.0.0.2", 00:18:40.445 "trsvcid": "4420" 00:18:40.445 }, 00:18:40.445 "secure_channel": true 00:18:40.445 } 00:18:40.445 } 00:18:40.445 ] 00:18:40.445 } 00:18:40.445 ] 00:18:40.445 }' 00:18:40.445 00:03:10 -- nvmf/common.sh@470 -- # nvmfpid=417056 00:18:40.445 00:03:10 -- nvmf/common.sh@471 -- # waitforlisten 417056 00:18:40.445 00:03:10 -- nvmf/common.sh@469 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:40.445 00:03:10 -- common/autotest_common.sh@817 -- # '[' -z 417056 ']' 00:18:40.445 00:03:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.445 00:03:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:40.445 00:03:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.445 00:03:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:40.445 00:03:10 -- common/autotest_common.sh@10 -- # set +x 00:18:40.445 [2024-04-27 00:03:10.650258] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:40.446 [2024-04-27 00:03:10.650315] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.705 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.705 [2024-04-27 00:03:10.714592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.705 [2024-04-27 00:03:10.777520] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.705 [2024-04-27 00:03:10.777557] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.705 [2024-04-27 00:03:10.777564] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.705 [2024-04-27 00:03:10.777571] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.705 [2024-04-27 00:03:10.777576] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.705 [2024-04-27 00:03:10.777627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.965 [2024-04-27 00:03:10.967094] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.965 [2024-04-27 00:03:10.999101] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.965 [2024-04-27 00:03:11.008148] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.226 00:03:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:41.226 00:03:11 -- common/autotest_common.sh@850 -- # return 0 00:18:41.226 00:03:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:41.226 00:03:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:41.226 00:03:11 -- common/autotest_common.sh@10 -- # set +x 00:18:41.487 00:03:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.487 00:03:11 -- target/tls.sh@272 -- # bdevperf_pid=417128 00:18:41.487 00:03:11 -- target/tls.sh@273 -- # waitforlisten 417128 /var/tmp/bdevperf.sock 00:18:41.487 00:03:11 -- common/autotest_common.sh@817 -- # '[' -z 417128 ']' 00:18:41.487 00:03:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.487 00:03:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:41.487 00:03:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:41.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.487 00:03:11 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:41.487 00:03:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:41.487 00:03:11 -- common/autotest_common.sh@10 -- # set +x 00:18:41.487 00:03:11 -- target/tls.sh@270 -- # echo '{ 00:18:41.487 "subsystems": [ 00:18:41.487 { 00:18:41.487 "subsystem": "keyring", 00:18:41.487 "config": [ 00:18:41.487 { 00:18:41.487 "method": "keyring_file_add_key", 00:18:41.487 "params": { 00:18:41.487 "name": "key0", 00:18:41.487 "path": "/tmp/tmp.ApF9KQHZmo" 00:18:41.487 } 00:18:41.487 } 00:18:41.487 ] 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "subsystem": "iobuf", 00:18:41.487 "config": [ 00:18:41.487 { 00:18:41.487 "method": "iobuf_set_options", 00:18:41.487 "params": { 00:18:41.487 "small_pool_count": 8192, 00:18:41.487 "large_pool_count": 1024, 00:18:41.487 "small_bufsize": 8192, 00:18:41.487 "large_bufsize": 135168 00:18:41.487 } 00:18:41.487 } 00:18:41.487 ] 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "subsystem": "sock", 00:18:41.487 "config": [ 00:18:41.487 { 00:18:41.487 "method": "sock_impl_set_options", 00:18:41.487 "params": { 00:18:41.487 "impl_name": "posix", 00:18:41.487 "recv_buf_size": 2097152, 00:18:41.487 "send_buf_size": 2097152, 00:18:41.487 "enable_recv_pipe": true, 00:18:41.487 "enable_quickack": false, 00:18:41.487 "enable_placement_id": 0, 00:18:41.487 "enable_zerocopy_send_server": true, 00:18:41.487 "enable_zerocopy_send_client": false, 00:18:41.487 "zerocopy_threshold": 0, 00:18:41.487 "tls_version": 0, 00:18:41.487 "enable_ktls": false 00:18:41.487 } 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "method": "sock_impl_set_options", 00:18:41.487 "params": { 00:18:41.487 "impl_name": "ssl", 00:18:41.487 "recv_buf_size": 4096, 00:18:41.487 "send_buf_size": 4096, 00:18:41.487 "enable_recv_pipe": true, 00:18:41.487 "enable_quickack": false, 00:18:41.487 "enable_placement_id": 0, 00:18:41.487 "enable_zerocopy_send_server": true, 00:18:41.487 "enable_zerocopy_send_client": false, 00:18:41.487 "zerocopy_threshold": 0, 00:18:41.487 "tls_version": 0, 00:18:41.487 "enable_ktls": false 00:18:41.487 } 00:18:41.487 } 00:18:41.487 ] 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "subsystem": "vmd", 00:18:41.487 "config": [] 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "subsystem": "accel", 00:18:41.487 "config": [ 00:18:41.487 { 00:18:41.487 "method": "accel_set_options", 00:18:41.487 "params": { 00:18:41.487 "small_cache_size": 128, 00:18:41.487 "large_cache_size": 16, 00:18:41.487 "task_count": 2048, 00:18:41.487 "sequence_count": 2048, 00:18:41.487 "buf_count": 2048 00:18:41.487 } 00:18:41.487 } 00:18:41.487 ] 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "subsystem": "bdev", 00:18:41.487 "config": [ 00:18:41.487 { 00:18:41.487 "method": "bdev_set_options", 00:18:41.487 "params": { 00:18:41.487 "bdev_io_pool_size": 65535, 00:18:41.487 "bdev_io_cache_size": 256, 00:18:41.487 "bdev_auto_examine": true, 00:18:41.487 "iobuf_small_cache_size": 128, 00:18:41.487 "iobuf_large_cache_size": 16 00:18:41.487 } 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "method": "bdev_raid_set_options", 00:18:41.487 "params": { 00:18:41.487 "process_window_size_kb": 1024 00:18:41.487 } 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "method": "bdev_iscsi_set_options", 00:18:41.487 "params": { 00:18:41.487 
"timeout_sec": 30 00:18:41.487 } 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "method": "bdev_nvme_set_options", 00:18:41.487 "params": { 00:18:41.487 "action_on_timeout": "none", 00:18:41.487 "timeout_us": 0, 00:18:41.487 "timeout_admin_us": 0, 00:18:41.487 "keep_alive_timeout_ms": 10000, 00:18:41.487 "arbitration_burst": 0, 00:18:41.487 "low_priority_weight": 0, 00:18:41.487 "medium_priority_weight": 0, 00:18:41.487 "high_priority_weight": 0, 00:18:41.487 "nvme_adminq_poll_period_us": 10000, 00:18:41.487 "nvme_ioq_poll_period_us": 0, 00:18:41.487 "io_queue_requests": 512, 00:18:41.487 "delay_cmd_submit": true, 00:18:41.487 "transport_retry_count": 4, 00:18:41.487 "bdev_retry_count": 3, 00:18:41.487 "transport_ack_timeout": 0, 00:18:41.487 "ctrlr_loss_timeout_sec": 0, 00:18:41.487 "reconnect_delay_sec": 0, 00:18:41.487 "fast_io_fail_timeout_sec": 0, 00:18:41.487 "disable_auto_failback": false, 00:18:41.487 "generate_uuids": false, 00:18:41.487 "transport_tos": 0, 00:18:41.487 "nvme_error_stat": false, 00:18:41.487 "rdma_srq_size": 0, 00:18:41.487 "io_path_stat": false, 00:18:41.487 "allow_accel_sequence": false, 00:18:41.487 "rdma_max_cq_size": 0, 00:18:41.487 "rdma_cm_event_timeout_ms": 0, 00:18:41.487 "dhchap_digests": [ 00:18:41.487 "sha256", 00:18:41.487 "sha384", 00:18:41.487 "sha512" 00:18:41.487 ], 00:18:41.487 "dhchap_dhgroups": [ 00:18:41.487 "null", 00:18:41.487 "ffdhe2048", 00:18:41.487 "ffdhe3072", 00:18:41.487 "ffdhe4096", 00:18:41.487 "ffdhe6144", 00:18:41.487 "ffdhe8192" 00:18:41.487 ] 00:18:41.487 } 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "method": "bdev_nvme_attach_controller", 00:18:41.487 "params": { 00:18:41.487 "name": "nvme0", 00:18:41.487 "trtype": "TCP", 00:18:41.487 "adrfam": "IPv4", 00:18:41.487 "traddr": "10.0.0.2", 00:18:41.487 "trsvcid": "4420", 00:18:41.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.487 "prchk_reftag": false, 00:18:41.487 "prchk_guard": false, 00:18:41.487 "ctrlr_loss_timeout_sec": 0, 00:18:41.487 "reconnect_delay_sec": 0, 00:18:41.487 "fast_io_fail_timeout_sec": 0, 00:18:41.487 "psk": "key0", 00:18:41.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.487 "hdgst": false, 00:18:41.487 "ddgst": false 00:18:41.487 } 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "method": "bdev_nvme_set_hotplug", 00:18:41.487 "params": { 00:18:41.487 "period_us": 100000, 00:18:41.487 "enable": false 00:18:41.487 } 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "method": "bdev_enable_histogram", 00:18:41.487 "params": { 00:18:41.487 "name": "nvme0n1", 00:18:41.487 "enable": true 00:18:41.487 } 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "method": "bdev_wait_for_examine" 00:18:41.487 } 00:18:41.487 ] 00:18:41.487 }, 00:18:41.487 { 00:18:41.487 "subsystem": "nbd", 00:18:41.487 "config": [] 00:18:41.487 } 00:18:41.487 ] 00:18:41.487 }' 00:18:41.487 [2024-04-27 00:03:11.498447] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:18:41.487 [2024-04-27 00:03:11.498500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417128 ] 00:18:41.487 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.487 [2024-04-27 00:03:11.558253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.488 [2024-04-27 00:03:11.621790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.748 [2024-04-27 00:03:11.752409] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.321 00:03:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.321 00:03:12 -- common/autotest_common.sh@850 -- # return 0 00:18:42.321 00:03:12 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:42.321 00:03:12 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:42.321 00:03:12 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.321 00:03:12 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.321 Running I/O for 1 seconds... 00:18:43.709 00:18:43.709 Latency(us) 00:18:43.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.709 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:43.709 Verification LBA range: start 0x0 length 0x2000 00:18:43.709 nvme0n1 : 1.04 1976.36 7.72 0.00 0.00 63682.35 6171.31 86507.52 00:18:43.709 =================================================================================================================== 00:18:43.709 Total : 1976.36 7.72 0.00 0.00 63682.35 6171.31 86507.52 00:18:43.709 0 00:18:43.709 00:03:13 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:43.709 00:03:13 -- target/tls.sh@279 -- # cleanup 00:18:43.709 00:03:13 -- target/tls.sh@15 -- # process_shm --id 0 00:18:43.709 00:03:13 -- common/autotest_common.sh@794 -- # type=--id 00:18:43.709 00:03:13 -- common/autotest_common.sh@795 -- # id=0 00:18:43.709 00:03:13 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:43.709 00:03:13 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:43.709 00:03:13 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:43.709 00:03:13 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:43.709 00:03:13 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:43.709 00:03:13 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:43.709 nvmf_trace.0 00:18:43.709 00:03:13 -- common/autotest_common.sh@809 -- # return 0 00:18:43.709 00:03:13 -- target/tls.sh@16 -- # killprocess 417128 00:18:43.709 00:03:13 -- common/autotest_common.sh@936 -- # '[' -z 417128 ']' 00:18:43.709 00:03:13 -- common/autotest_common.sh@940 -- # kill -0 417128 00:18:43.709 00:03:13 -- common/autotest_common.sh@941 -- # uname 00:18:43.709 00:03:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:43.709 00:03:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 417128 00:18:43.709 00:03:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:43.709 00:03:13 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:18:43.709 00:03:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 417128' 00:18:43.709 killing process with pid 417128 00:18:43.709 00:03:13 -- common/autotest_common.sh@955 -- # kill 417128 00:18:43.709 Received shutdown signal, test time was about 1.000000 seconds 00:18:43.709 00:18:43.709 Latency(us) 00:18:43.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.709 =================================================================================================================== 00:18:43.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.709 00:03:13 -- common/autotest_common.sh@960 -- # wait 417128 00:18:43.709 00:03:13 -- target/tls.sh@17 -- # nvmftestfini 00:18:43.709 00:03:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:43.709 00:03:13 -- nvmf/common.sh@117 -- # sync 00:18:43.709 00:03:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.709 00:03:13 -- nvmf/common.sh@120 -- # set +e 00:18:43.709 00:03:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.709 00:03:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.709 rmmod nvme_tcp 00:18:43.709 rmmod nvme_fabrics 00:18:43.709 rmmod nvme_keyring 00:18:43.709 00:03:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.709 00:03:13 -- nvmf/common.sh@124 -- # set -e 00:18:43.709 00:03:13 -- nvmf/common.sh@125 -- # return 0 00:18:43.709 00:03:13 -- nvmf/common.sh@478 -- # '[' -n 417056 ']' 00:18:43.709 00:03:13 -- nvmf/common.sh@479 -- # killprocess 417056 00:18:43.709 00:03:13 -- common/autotest_common.sh@936 -- # '[' -z 417056 ']' 00:18:43.709 00:03:13 -- common/autotest_common.sh@940 -- # kill -0 417056 00:18:43.709 00:03:13 -- common/autotest_common.sh@941 -- # uname 00:18:43.709 00:03:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:43.709 00:03:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 417056 00:18:43.970 00:03:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:43.970 00:03:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:43.970 00:03:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 417056' 00:18:43.970 killing process with pid 417056 00:18:43.970 00:03:13 -- common/autotest_common.sh@955 -- # kill 417056 00:18:43.970 00:03:13 -- common/autotest_common.sh@960 -- # wait 417056 00:18:43.970 00:03:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:43.970 00:03:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:43.970 00:03:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:43.970 00:03:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.970 00:03:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.970 00:03:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.970 00:03:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.970 00:03:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.516 00:03:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.516 00:03:16 -- target/tls.sh@18 -- # rm -f /tmp/tmp.iV9dfyylrF /tmp/tmp.6QukJAt80H /tmp/tmp.ApF9KQHZmo 00:18:46.516 00:18:46.516 real 1m23.223s 00:18:46.516 user 2m5.679s 00:18:46.516 sys 0m26.934s 00:18:46.516 00:03:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:46.516 00:03:16 -- common/autotest_common.sh@10 -- # set +x 00:18:46.516 ************************************ 00:18:46.516 END TEST nvmf_tls 00:18:46.516 
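The /dev/fd/62 and /dev/fd/63 configs echoed above wire TLS up through the keyring on both sides: the target requires PSK "key0" (backed by /tmp/tmp.ApF9KQHZmo) for nqn.2016-06.io.spdk:host1, and bdevperf attaches with the same key. As a minimal sketch only, the same setup can be done at runtime with rpc.py calls roughly like the ones below, assuming a target on the default /var/tmp/spdk.sock and a bdevperf instance on /var/tmp/bdevperf.sock; the flag spellings follow current rpc.py usage and may differ slightly in this tree.

# Target side: register the PSK file, export a malloc namespace over TLS-enabled TCP.
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ApF9KQHZmo
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create -b malloc0 32 4096                  # 8192 blocks x 4096 B
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
# Initiator side: the bdevperf config above amounts to this attach call.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0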
************************************ 00:18:46.516 00:03:16 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:46.516 00:03:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:46.516 00:03:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.516 00:03:16 -- common/autotest_common.sh@10 -- # set +x 00:18:46.516 ************************************ 00:18:46.516 START TEST nvmf_fips 00:18:46.516 ************************************ 00:18:46.516 00:03:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:46.516 * Looking for test storage... 00:18:46.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:46.516 00:03:16 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.516 00:03:16 -- nvmf/common.sh@7 -- # uname -s 00:18:46.516 00:03:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.516 00:03:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.516 00:03:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.516 00:03:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.516 00:03:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.516 00:03:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.516 00:03:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.516 00:03:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.516 00:03:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.516 00:03:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.516 00:03:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.516 00:03:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.516 00:03:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.516 00:03:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.516 00:03:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.516 00:03:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.516 00:03:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.516 00:03:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.516 00:03:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.516 00:03:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.516 00:03:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.516 00:03:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.516 00:03:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.516 00:03:16 -- paths/export.sh@5 -- # export PATH 00:18:46.516 00:03:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.516 00:03:16 -- nvmf/common.sh@47 -- # : 0 00:18:46.516 00:03:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.516 00:03:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.516 00:03:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.516 00:03:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.516 00:03:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.516 00:03:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.517 00:03:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.517 00:03:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.517 00:03:16 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.517 00:03:16 -- fips/fips.sh@89 -- # check_openssl_version 00:18:46.517 00:03:16 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:46.517 00:03:16 -- fips/fips.sh@85 -- # openssl version 00:18:46.517 00:03:16 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:46.517 00:03:16 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:46.517 00:03:16 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:46.517 00:03:16 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:46.517 00:03:16 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:46.517 00:03:16 -- scripts/common.sh@333 -- # IFS=.-: 00:18:46.517 00:03:16 -- scripts/common.sh@333 -- # read -ra ver1 00:18:46.517 00:03:16 -- scripts/common.sh@334 -- # IFS=.-: 00:18:46.517 00:03:16 -- scripts/common.sh@334 -- # read -ra ver2 00:18:46.517 00:03:16 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:46.517 00:03:16 -- scripts/common.sh@337 -- # ver1_l=3 00:18:46.517 00:03:16 -- scripts/common.sh@338 -- # ver2_l=3 00:18:46.517 00:03:16 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:18:46.517 00:03:16 -- scripts/common.sh@341 -- # case "$op" in 00:18:46.517 00:03:16 -- scripts/common.sh@345 -- # : 1 00:18:46.517 00:03:16 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:46.517 00:03:16 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.517 00:03:16 -- scripts/common.sh@362 -- # decimal 3 00:18:46.517 00:03:16 -- scripts/common.sh@350 -- # local d=3 00:18:46.517 00:03:16 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:46.517 00:03:16 -- scripts/common.sh@352 -- # echo 3 00:18:46.517 00:03:16 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:46.517 00:03:16 -- scripts/common.sh@363 -- # decimal 3 00:18:46.517 00:03:16 -- scripts/common.sh@350 -- # local d=3 00:18:46.517 00:03:16 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:46.517 00:03:16 -- scripts/common.sh@352 -- # echo 3 00:18:46.517 00:03:16 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:46.517 00:03:16 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:46.517 00:03:16 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:46.517 00:03:16 -- scripts/common.sh@361 -- # (( v++ )) 00:18:46.517 00:03:16 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.517 00:03:16 -- scripts/common.sh@362 -- # decimal 0 00:18:46.517 00:03:16 -- scripts/common.sh@350 -- # local d=0 00:18:46.517 00:03:16 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:46.517 00:03:16 -- scripts/common.sh@352 -- # echo 0 00:18:46.517 00:03:16 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:46.517 00:03:16 -- scripts/common.sh@363 -- # decimal 0 00:18:46.517 00:03:16 -- scripts/common.sh@350 -- # local d=0 00:18:46.517 00:03:16 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:46.517 00:03:16 -- scripts/common.sh@352 -- # echo 0 00:18:46.517 00:03:16 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:46.517 00:03:16 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:46.517 00:03:16 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:46.517 00:03:16 -- scripts/common.sh@361 -- # (( v++ )) 00:18:46.517 00:03:16 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.517 00:03:16 -- scripts/common.sh@362 -- # decimal 9 00:18:46.517 00:03:16 -- scripts/common.sh@350 -- # local d=9 00:18:46.517 00:03:16 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:46.517 00:03:16 -- scripts/common.sh@352 -- # echo 9 00:18:46.517 00:03:16 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:46.517 00:03:16 -- scripts/common.sh@363 -- # decimal 0 00:18:46.517 00:03:16 -- scripts/common.sh@350 -- # local d=0 00:18:46.517 00:03:16 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:46.517 00:03:16 -- scripts/common.sh@352 -- # echo 0 00:18:46.517 00:03:16 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:46.517 00:03:16 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:46.517 00:03:16 -- scripts/common.sh@364 -- # return 0 00:18:46.517 00:03:16 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:46.517 00:03:16 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:46.517 00:03:16 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:46.517 00:03:16 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:46.517 00:03:16 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:46.517 00:03:16 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:46.517 00:03:16 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:46.517 00:03:16 -- fips/fips.sh@113 -- # build_openssl_config 00:18:46.517 00:03:16 -- fips/fips.sh@37 -- # cat 00:18:46.517 00:03:16 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:46.517 00:03:16 -- fips/fips.sh@58 -- # cat - 00:18:46.517 00:03:16 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:46.517 00:03:16 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:46.517 00:03:16 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:46.517 00:03:16 -- fips/fips.sh@116 -- # openssl list -providers 00:18:46.517 00:03:16 -- fips/fips.sh@116 -- # grep name 00:18:46.517 00:03:16 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:46.517 00:03:16 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:46.517 00:03:16 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:46.517 00:03:16 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:46.517 00:03:16 -- common/autotest_common.sh@638 -- # local es=0 00:18:46.517 00:03:16 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:46.517 00:03:16 -- fips/fips.sh@127 -- # : 00:18:46.517 00:03:16 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:46.517 00:03:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:46.517 00:03:16 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:46.517 00:03:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:46.517 00:03:16 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:46.517 00:03:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:46.517 00:03:16 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:46.517 00:03:16 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:46.517 00:03:16 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:46.517 Error setting digest 00:18:46.517 0072B120647F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:46.517 0072B120647F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:46.517 00:03:16 -- common/autotest_common.sh@641 -- # es=1 00:18:46.517 00:03:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:46.517 00:03:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:46.517 00:03:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:46.517 00:03:16 -- fips/fips.sh@130 -- # nvmftestinit 00:18:46.517 00:03:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:46.517 00:03:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.517 00:03:16 -- nvmf/common.sh@437 -- # prepare_net_devs 
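The fips.sh run traced above reduces to three checks: the system OpenSSL must be 3.0.0 or newer (the digit-by-digit cmp_versions walk), the FIPS provider must be loadable (spdk_fips.conf is generated and exported as OPENSSL_CONF for that), and a non-approved digest such as MD5 must be rejected, which is exactly the "Error setting digest" failure captured here. A quick manual spot-check of the same conditions, as a sketch that assumes the same OPENSSL_CONF is still exported:

openssl version                          # wants a 3.x OpenSSL, as compared above
openssl list -providers | grep -i name   # expect both a base and a fips provider listed
openssl md5 /dev/null \
    && echo 'MD5 accepted - FIPS restrictions are NOT being enforced' \
    || echo 'MD5 rejected - FIPS restrictions are active'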
00:18:46.517 00:03:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:46.517 00:03:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:46.517 00:03:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.517 00:03:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.517 00:03:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.517 00:03:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:46.517 00:03:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:46.517 00:03:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.517 00:03:16 -- common/autotest_common.sh@10 -- # set +x 00:18:54.721 00:03:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:54.721 00:03:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:54.722 00:03:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:54.722 00:03:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:54.722 00:03:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:54.722 00:03:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:54.722 00:03:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:54.722 00:03:23 -- nvmf/common.sh@295 -- # net_devs=() 00:18:54.722 00:03:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:54.722 00:03:23 -- nvmf/common.sh@296 -- # e810=() 00:18:54.722 00:03:23 -- nvmf/common.sh@296 -- # local -ga e810 00:18:54.722 00:03:23 -- nvmf/common.sh@297 -- # x722=() 00:18:54.722 00:03:23 -- nvmf/common.sh@297 -- # local -ga x722 00:18:54.722 00:03:23 -- nvmf/common.sh@298 -- # mlx=() 00:18:54.722 00:03:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:54.722 00:03:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.722 00:03:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:54.722 00:03:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:54.722 00:03:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:54.722 00:03:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.722 00:03:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:54.722 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:54.722 00:03:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.722 00:03:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:54.722 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:54.722 00:03:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:54.722 00:03:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.722 00:03:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.722 00:03:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:54.722 00:03:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.722 00:03:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:54.722 Found net devices under 0000:31:00.0: cvl_0_0 00:18:54.722 00:03:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.722 00:03:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.722 00:03:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.722 00:03:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:54.722 00:03:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.722 00:03:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:54.722 Found net devices under 0000:31:00.1: cvl_0_1 00:18:54.722 00:03:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.722 00:03:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:54.722 00:03:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:54.722 00:03:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:54.722 00:03:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.722 00:03:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.722 00:03:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.722 00:03:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:54.722 00:03:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.722 00:03:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.722 00:03:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:54.722 00:03:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.722 00:03:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.722 00:03:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:54.722 00:03:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:54.722 00:03:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.722 00:03:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.722 00:03:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.722 00:03:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:18:54.722 00:03:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:54.722 00:03:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.722 00:03:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.722 00:03:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.722 00:03:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:54.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:18:54.722 00:18:54.722 --- 10.0.0.2 ping statistics --- 00:18:54.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.722 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:18:54.722 00:03:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:18:54.722 00:18:54.722 --- 10.0.0.1 ping statistics --- 00:18:54.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.722 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:18:54.722 00:03:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.722 00:03:23 -- nvmf/common.sh@411 -- # return 0 00:18:54.722 00:03:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:54.722 00:03:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.722 00:03:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:54.722 00:03:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.722 00:03:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:54.722 00:03:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:54.722 00:03:23 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:54.722 00:03:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:54.722 00:03:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:54.723 00:03:23 -- common/autotest_common.sh@10 -- # set +x 00:18:54.723 00:03:23 -- nvmf/common.sh@470 -- # nvmfpid=421899 00:18:54.723 00:03:23 -- nvmf/common.sh@471 -- # waitforlisten 421899 00:18:54.723 00:03:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:54.723 00:03:23 -- common/autotest_common.sh@817 -- # '[' -z 421899 ']' 00:18:54.723 00:03:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.723 00:03:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:54.723 00:03:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.723 00:03:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:54.723 00:03:23 -- common/autotest_common.sh@10 -- # set +x 00:18:54.723 [2024-04-27 00:03:24.049942] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:18:54.723 [2024-04-27 00:03:24.050021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.723 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.723 [2024-04-27 00:03:24.121957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.723 [2024-04-27 00:03:24.195731] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.723 [2024-04-27 00:03:24.195772] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.723 [2024-04-27 00:03:24.195780] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.723 [2024-04-27 00:03:24.195787] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.723 [2024-04-27 00:03:24.195793] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.723 [2024-04-27 00:03:24.195811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.723 00:03:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:54.723 00:03:24 -- common/autotest_common.sh@850 -- # return 0 00:18:54.723 00:03:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:54.723 00:03:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:54.723 00:03:24 -- common/autotest_common.sh@10 -- # set +x 00:18:54.723 00:03:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.723 00:03:24 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:54.723 00:03:24 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:54.723 00:03:24 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.723 00:03:24 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:54.723 00:03:24 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.723 00:03:24 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.723 00:03:24 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.723 00:03:24 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.987 [2024-04-27 00:03:24.975157] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.987 [2024-04-27 00:03:24.991160] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.987 [2024-04-27 00:03:24.991354] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.987 [2024-04-27 00:03:25.017931] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:54.987 malloc0 00:18:54.987 00:03:25 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.987 00:03:25 -- fips/fips.sh@147 -- # bdevperf_pid=422250 00:18:54.987 00:03:25 -- fips/fips.sh@148 -- # waitforlisten 422250 /var/tmp/bdevperf.sock 00:18:54.987 00:03:25 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z 
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.987 00:03:25 -- common/autotest_common.sh@817 -- # '[' -z 422250 ']' 00:18:54.987 00:03:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.987 00:03:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:54.987 00:03:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.987 00:03:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:54.987 00:03:25 -- common/autotest_common.sh@10 -- # set +x 00:18:54.987 [2024-04-27 00:03:25.111423] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:18:54.987 [2024-04-27 00:03:25.111473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422250 ] 00:18:54.987 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.987 [2024-04-27 00:03:25.161333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.246 [2024-04-27 00:03:25.212601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.814 00:03:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:55.814 00:03:25 -- common/autotest_common.sh@850 -- # return 0 00:18:55.814 00:03:25 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:55.814 [2024-04-27 00:03:25.997573] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.814 [2024-04-27 00:03:25.997630] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:56.073 TLSTESTn1 00:18:56.073 00:03:26 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.073 Running I/O for 10 seconds... 
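Condensed from the trace above, the initiator side of the FIPS TLS run is short: write the PSK in the NVMe TLS interchange format, start bdevperf, attach a controller with that PSK, and drive I/O for ten seconds. The sketch below reuses the exact key material, NQNs and flags from the log (paths shortened to the spdk checkout) and assumes the target was already configured by setup_nvmf_tgt_conf to require that key for host1.

echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
chmod 0600 key.txt                                   # same permissions fips.sh sets
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests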
00:19:06.071 00:19:06.071 Latency(us) 00:19:06.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.071 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:06.071 Verification LBA range: start 0x0 length 0x2000 00:19:06.071 TLSTESTn1 : 10.02 3141.54 12.27 0.00 0.00 40692.43 4696.75 77332.48 00:19:06.071 =================================================================================================================== 00:19:06.071 Total : 3141.54 12.27 0.00 0.00 40692.43 4696.75 77332.48 00:19:06.071 0 00:19:06.071 00:03:36 -- fips/fips.sh@1 -- # cleanup 00:19:06.071 00:03:36 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:06.071 00:03:36 -- common/autotest_common.sh@794 -- # type=--id 00:19:06.071 00:03:36 -- common/autotest_common.sh@795 -- # id=0 00:19:06.071 00:03:36 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:06.071 00:03:36 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:06.071 00:03:36 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:06.071 00:03:36 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:06.071 00:03:36 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:06.071 00:03:36 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:06.071 nvmf_trace.0 00:19:06.332 00:03:36 -- common/autotest_common.sh@809 -- # return 0 00:19:06.332 00:03:36 -- fips/fips.sh@16 -- # killprocess 422250 00:19:06.332 00:03:36 -- common/autotest_common.sh@936 -- # '[' -z 422250 ']' 00:19:06.332 00:03:36 -- common/autotest_common.sh@940 -- # kill -0 422250 00:19:06.332 00:03:36 -- common/autotest_common.sh@941 -- # uname 00:19:06.332 00:03:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.332 00:03:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 422250 00:19:06.332 00:03:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:06.332 00:03:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:06.332 00:03:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 422250' 00:19:06.332 killing process with pid 422250 00:19:06.332 00:03:36 -- common/autotest_common.sh@955 -- # kill 422250 00:19:06.332 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.332 00:19:06.332 Latency(us) 00:19:06.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.332 =================================================================================================================== 00:19:06.332 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.332 [2024-04-27 00:03:36.397660] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:06.332 00:03:36 -- common/autotest_common.sh@960 -- # wait 422250 00:19:06.332 00:03:36 -- fips/fips.sh@17 -- # nvmftestfini 00:19:06.332 00:03:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:06.332 00:03:36 -- nvmf/common.sh@117 -- # sync 00:19:06.332 00:03:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:06.332 00:03:36 -- nvmf/common.sh@120 -- # set +e 00:19:06.332 00:03:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.332 00:03:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:06.332 rmmod nvme_tcp 00:19:06.332 rmmod nvme_fabrics 00:19:06.594 rmmod nvme_keyring 00:19:06.594 
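The process_shm step above packs the target's shared-memory trace buffer into nvmf_trace.0_shm.tar.gz under the job's output directory; when a run like this needs debugging, the archive can be pulled apart offline, and against a live target the snapshot command is the one the application prints at startup. A small sketch, with the output path shortened:

tar -xzf output/nvmf_trace.0_shm.tar.gz       # extracts nvmf_trace.0 from this run
# live equivalent while the target is still up, per the startup notice above:
# build/bin/spdk_trace -s nvmf -i 0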
00:03:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.594 00:03:36 -- nvmf/common.sh@124 -- # set -e 00:19:06.594 00:03:36 -- nvmf/common.sh@125 -- # return 0 00:19:06.594 00:03:36 -- nvmf/common.sh@478 -- # '[' -n 421899 ']' 00:19:06.594 00:03:36 -- nvmf/common.sh@479 -- # killprocess 421899 00:19:06.594 00:03:36 -- common/autotest_common.sh@936 -- # '[' -z 421899 ']' 00:19:06.594 00:03:36 -- common/autotest_common.sh@940 -- # kill -0 421899 00:19:06.594 00:03:36 -- common/autotest_common.sh@941 -- # uname 00:19:06.594 00:03:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.594 00:03:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 421899 00:19:06.594 00:03:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:06.594 00:03:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:06.594 00:03:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 421899' 00:19:06.594 killing process with pid 421899 00:19:06.594 00:03:36 -- common/autotest_common.sh@955 -- # kill 421899 00:19:06.594 [2024-04-27 00:03:36.639605] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:06.594 00:03:36 -- common/autotest_common.sh@960 -- # wait 421899 00:19:06.594 00:03:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:06.594 00:03:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:06.594 00:03:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:06.594 00:03:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.594 00:03:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.594 00:03:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.594 00:03:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.594 00:03:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.139 00:03:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:09.139 00:03:38 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:09.139 00:19:09.139 real 0m22.493s 00:19:09.139 user 0m23.304s 00:19:09.139 sys 0m9.796s 00:19:09.139 00:03:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:09.139 00:03:38 -- common/autotest_common.sh@10 -- # set +x 00:19:09.139 ************************************ 00:19:09.139 END TEST nvmf_fips 00:19:09.139 ************************************ 00:19:09.139 00:03:38 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:19:09.139 00:03:38 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:19:09.139 00:03:38 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:19:09.139 00:03:38 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:19:09.139 00:03:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:09.139 00:03:38 -- common/autotest_common.sh@10 -- # set +x 00:19:15.731 00:03:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:15.731 00:03:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:15.731 00:03:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:15.731 00:03:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:15.731 00:03:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:15.731 00:03:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:15.731 00:03:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:15.731 00:03:45 -- nvmf/common.sh@295 -- # net_devs=() 00:19:15.731 00:03:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:15.731 00:03:45 
-- nvmf/common.sh@296 -- # e810=() 00:19:15.731 00:03:45 -- nvmf/common.sh@296 -- # local -ga e810 00:19:15.731 00:03:45 -- nvmf/common.sh@297 -- # x722=() 00:19:15.731 00:03:45 -- nvmf/common.sh@297 -- # local -ga x722 00:19:15.731 00:03:45 -- nvmf/common.sh@298 -- # mlx=() 00:19:15.731 00:03:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:15.731 00:03:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.731 00:03:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:15.731 00:03:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:15.731 00:03:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:15.731 00:03:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.731 00:03:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:15.731 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:15.731 00:03:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.731 00:03:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:15.731 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:15.731 00:03:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:15.731 00:03:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:15.731 00:03:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.731 00:03:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.731 00:03:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:15.731 00:03:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.731 00:03:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:19:15.731 Found net devices under 0000:31:00.0: cvl_0_0 00:19:15.731 00:03:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.731 00:03:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.731 00:03:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.731 00:03:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:15.731 00:03:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.731 00:03:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:15.731 Found net devices under 0000:31:00.1: cvl_0_1 00:19:15.731 00:03:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.731 00:03:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:15.731 00:03:45 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.731 00:03:45 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:19:15.731 00:03:45 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:15.731 00:03:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:15.731 00:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.731 00:03:45 -- common/autotest_common.sh@10 -- # set +x 00:19:15.731 ************************************ 00:19:15.732 START TEST nvmf_perf_adq 00:19:15.732 ************************************ 00:19:15.732 00:03:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:15.992 * Looking for test storage... 00:19:15.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.992 00:03:45 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.993 00:03:45 -- nvmf/common.sh@7 -- # uname -s 00:19:15.993 00:03:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.993 00:03:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.993 00:03:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.993 00:03:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.993 00:03:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.993 00:03:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.993 00:03:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.993 00:03:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.993 00:03:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.993 00:03:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.993 00:03:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.993 00:03:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.993 00:03:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.993 00:03:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.993 00:03:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.993 00:03:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.993 00:03:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.993 00:03:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.993 00:03:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.993 00:03:46 -- scripts/common.sh@517 -- # 
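gather_supported_nvmf_pci_devs, whose trace appears above, matches known Intel E810 device IDs (0x159b/0x1592 here) and resolves each PCI function to its kernel netdev through sysfs before the perf_adq test starts. A rough equivalent of that sysfs walk, restricted to the 0x159b ID seen in this run:

# sketch: resolve Intel E810 functions (vendor 0x8086, device 0x159b) to netdev names via
# standard sysfs paths, mirroring the discovery performed by the trace above
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] || continue
        echo "Found net device under ${pci##*/}: ${netdir##*/}"
    done
done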
source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.993 00:03:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.993 00:03:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.993 00:03:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.993 00:03:46 -- paths/export.sh@5 -- # export PATH 00:19:15.993 00:03:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.993 00:03:46 -- nvmf/common.sh@47 -- # : 0 00:19:15.993 00:03:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:15.993 00:03:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:15.993 00:03:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.993 00:03:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.993 00:03:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.993 00:03:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:15.993 00:03:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:15.993 00:03:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:15.993 00:03:46 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:15.993 00:03:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:15.993 00:03:46 -- common/autotest_common.sh@10 -- # set +x 00:19:24.134 00:03:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:24.134 00:03:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.134 00:03:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.134 00:03:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.134 00:03:52 -- nvmf/common.sh@292 
-- # local -a pci_net_devs 00:19:24.134 00:03:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.134 00:03:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.134 00:03:52 -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.134 00:03:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.134 00:03:52 -- nvmf/common.sh@296 -- # e810=() 00:19:24.134 00:03:52 -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.134 00:03:52 -- nvmf/common.sh@297 -- # x722=() 00:19:24.134 00:03:52 -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.134 00:03:52 -- nvmf/common.sh@298 -- # mlx=() 00:19:24.134 00:03:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.134 00:03:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.134 00:03:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.134 00:03:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.134 00:03:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.134 00:03:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.134 00:03:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:24.134 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:24.134 00:03:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.134 00:03:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:24.134 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:24.134 00:03:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.134 00:03:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.134 00:03:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.134 00:03:52 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.134 00:03:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:24.134 00:03:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.134 00:03:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:24.134 Found net devices under 0000:31:00.0: cvl_0_0 00:19:24.134 00:03:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.134 00:03:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.134 00:03:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.134 00:03:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:24.134 00:03:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.134 00:03:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:24.135 Found net devices under 0000:31:00.1: cvl_0_1 00:19:24.135 00:03:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.135 00:03:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:24.135 00:03:52 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.135 00:03:52 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:24.135 00:03:52 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:24.135 00:03:52 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:19:24.135 00:03:52 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:24.395 00:03:54 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:26.306 00:03:56 -- target/perf_adq.sh@54 -- # sleep 5 00:19:31.594 00:04:01 -- target/perf_adq.sh@67 -- # nvmftestinit 00:19:31.594 00:04:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:31.594 00:04:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.594 00:04:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:31.594 00:04:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:31.594 00:04:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:31.594 00:04:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.594 00:04:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.594 00:04:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.594 00:04:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:31.594 00:04:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:31.594 00:04:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:31.594 00:04:01 -- common/autotest_common.sh@10 -- # set +x 00:19:31.594 00:04:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:31.594 00:04:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.594 00:04:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.594 00:04:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.594 00:04:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.594 00:04:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.594 00:04:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.594 00:04:01 -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.594 00:04:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.594 00:04:01 -- nvmf/common.sh@296 -- # e810=() 00:19:31.594 00:04:01 -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.594 00:04:01 -- nvmf/common.sh@297 -- # x722=() 00:19:31.594 00:04:01 -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.594 00:04:01 -- nvmf/common.sh@298 -- # mlx=() 00:19:31.594 00:04:01 -- nvmf/common.sh@298 -- # 
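adq_reload_driver, traced above, just cycles the ice driver so the E810 ports come back with default queue/channel state before any ADQ configuration is applied; the test uses a fixed sleep to let the netdevs re-register. A sketch of the same step with an explicit wait instead of the fixed delay (cvl_0_0 is the udev interface name used on this rig):

# reload the ice driver to reset channel state before ADQ configuration
rmmod ice 2>/dev/null || true
modprobe ice
for _ in $(seq 1 30); do
    [[ -e /sys/class/net/cvl_0_0 ]] && break   # wait for the renamed port to reappear
    sleep 1
done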
local -ga mlx 00:19:31.594 00:04:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.594 00:04:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.594 00:04:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:31.594 00:04:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:31.594 00:04:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:31.594 00:04:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:31.594 00:04:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.594 00:04:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.594 00:04:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:31.594 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:31.594 00:04:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.594 00:04:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.594 00:04:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.594 00:04:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.594 00:04:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.595 00:04:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:31.595 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:31.595 00:04:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.595 00:04:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.595 00:04:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.595 00:04:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:31.595 00:04:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.595 00:04:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:31.595 Found net devices under 0000:31:00.0: cvl_0_0 00:19:31.595 00:04:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.595 00:04:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.595 00:04:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:31.595 00:04:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:31.595 00:04:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.595 00:04:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:31.595 Found net devices under 0000:31:00.1: cvl_0_1 00:19:31.595 00:04:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.595 00:04:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:31.595 00:04:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:31.595 00:04:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:31.595 00:04:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:31.595 00:04:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.595 00:04:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.595 00:04:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.595 00:04:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:31.595 00:04:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.595 00:04:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.595 00:04:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:31.595 00:04:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.595 00:04:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.595 00:04:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:31.595 00:04:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:31.595 00:04:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.595 00:04:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.595 00:04:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.595 00:04:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.595 00:04:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:31.595 00:04:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.595 00:04:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.595 00:04:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.595 00:04:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:31.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:19:31.595 00:19:31.595 --- 10.0.0.2 ping statistics --- 00:19:31.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.595 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:19:31.595 00:04:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:19:31.857 00:19:31.857 --- 10.0.0.1 ping statistics --- 00:19:31.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.857 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:19:31.857 00:04:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.857 00:04:01 -- nvmf/common.sh@411 -- # return 0 00:19:31.857 00:04:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:31.857 00:04:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.857 00:04:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:31.857 00:04:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:31.857 00:04:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.857 00:04:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:31.857 00:04:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:31.857 00:04:01 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:31.857 00:04:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:31.857 00:04:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:31.857 00:04:01 -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 00:04:01 -- nvmf/common.sh@470 -- # nvmfpid=434265 00:19:31.857 00:04:01 -- nvmf/common.sh@471 -- # waitforlisten 434265 00:19:31.857 00:04:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:31.857 00:04:01 -- common/autotest_common.sh@817 -- # '[' -z 434265 ']' 00:19:31.857 00:04:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.857 00:04:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:31.857 00:04:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.857 00:04:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:31.857 00:04:01 -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 [2024-04-27 00:04:01.915866] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:19:31.857 [2024-04-27 00:04:01.915915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.857 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.857 [2024-04-27 00:04:01.982281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.857 [2024-04-27 00:04:02.047536] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.857 [2024-04-27 00:04:02.047574] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.857 [2024-04-27 00:04:02.047582] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.857 [2024-04-27 00:04:02.047588] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.857 [2024-04-27 00:04:02.047594] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
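nvmftestinit above moved cvl_0_0 into the cvl_0_0_ns_spdk namespace and verified connectivity with the two pings; nvmfappstart then launches nvmf_tgt inside that namespace with --wait-for-rpc and waits for its RPC socket. The launch-and-wait pattern written out directly; rpc.py and the rpc_get_methods call are standard SPDK pieces, and polling them is an assumption about how one might stand in for waitforlisten:

# start the target inside the namespace and poll its RPC socket before configuring it
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -m 0xF --wait-for-rpc &
tgtpid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # the UNIX RPC socket lives in the filesystem, so no netns exec is needed here
done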
00:19:31.857 [2024-04-27 00:04:02.047712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.857 [2024-04-27 00:04:02.047893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.857 [2024-04-27 00:04:02.047959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.857 [2024-04-27 00:04:02.047960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.800 00:04:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:32.800 00:04:02 -- common/autotest_common.sh@850 -- # return 0 00:19:32.800 00:04:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:32.800 00:04:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:32.800 00:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.800 00:04:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.800 00:04:02 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:19:32.800 00:04:02 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:32.800 00:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.800 00:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.801 00:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.801 00:04:02 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:32.801 00:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.801 00:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.801 00:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.801 00:04:02 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:32.801 00:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.801 00:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.801 [2024-04-27 00:04:02.813793] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.801 00:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.801 00:04:02 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:32.801 00:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.801 00:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.801 Malloc1 00:19:32.801 00:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.801 00:04:02 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:32.801 00:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.801 00:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.801 00:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.801 00:04:02 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:32.801 00:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.801 00:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.801 00:04:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.801 00:04:02 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.801 00:04:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.801 00:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.801 [2024-04-27 00:04:02.869189] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.801 00:04:02 -- 
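Because the target was started with --wait-for-rpc, adq_configure_nvmf_target (traced above) can set the posix sock implementation options before the subsystem framework initializes, and only then create the TCP transport with a socket priority. The same RPC sequence issued explicitly through rpc.py, with paths and values taken from this run:

# RPC sequence for the first ADQ pass (placement id 0, sock priority 0)
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420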
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.801 00:04:02 -- target/perf_adq.sh@73 -- # perfpid=434618 00:19:32.801 00:04:02 -- target/perf_adq.sh@74 -- # sleep 2 00:19:32.801 00:04:02 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:32.801 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.715 00:04:04 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:19:34.715 00:04:04 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:34.715 00:04:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.715 00:04:04 -- target/perf_adq.sh@76 -- # wc -l 00:19:34.715 00:04:04 -- common/autotest_common.sh@10 -- # set +x 00:19:34.715 00:04:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.715 00:04:04 -- target/perf_adq.sh@76 -- # count=4 00:19:34.715 00:04:04 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:19:34.715 00:04:04 -- target/perf_adq.sh@81 -- # wait 434618 00:19:42.857 Initializing NVMe Controllers 00:19:42.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:42.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:42.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:42.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:42.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:42.857 Initialization complete. Launching workers. 00:19:42.857 ======================================================== 00:19:42.857 Latency(us) 00:19:42.857 Device Information : IOPS MiB/s Average min max 00:19:42.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11160.80 43.60 5735.02 1451.93 9390.49 00:19:42.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15489.80 60.51 4132.13 1202.53 8583.68 00:19:42.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14695.20 57.40 4355.69 1023.47 11365.03 00:19:42.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11148.10 43.55 5741.52 1636.87 11735.84 00:19:42.857 ======================================================== 00:19:42.857 Total : 52493.88 205.05 4877.29 1023.47 11735.84 00:19:42.857 00:19:42.857 00:04:13 -- target/perf_adq.sh@82 -- # nvmftestfini 00:19:42.857 00:04:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:42.857 00:04:13 -- nvmf/common.sh@117 -- # sync 00:19:42.857 00:04:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:42.857 00:04:13 -- nvmf/common.sh@120 -- # set +e 00:19:42.857 00:04:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:42.857 00:04:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:42.857 rmmod nvme_tcp 00:19:42.857 rmmod nvme_fabrics 00:19:42.857 rmmod nvme_keyring 00:19:43.117 00:04:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.117 00:04:13 -- nvmf/common.sh@124 -- # set -e 00:19:43.117 00:04:13 -- nvmf/common.sh@125 -- # return 0 00:19:43.117 00:04:13 -- nvmf/common.sh@478 -- # '[' -n 434265 ']' 00:19:43.117 00:04:13 -- nvmf/common.sh@479 -- # killprocess 434265 00:19:43.117 00:04:13 -- common/autotest_common.sh@936 -- # '[' -z 434265 ']' 00:19:43.117 00:04:13 -- common/autotest_common.sh@940 -- # 
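The placement check above is the pass/fail criterion for this stage: once the 4-core perf workload (core mask 0xF0) has attached, nvmf_get_stats should report each of the target's four poll groups owning exactly one I/O qpair. Spelled out as a standalone check, reusing the same jq filter as the test:

# count poll groups that currently hold exactly one I/O qpair; 4 is expected here
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
count=$($rpc nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
[[ $count -eq 4 ]] || echo "unexpected qpair placement: only $count poll groups have one qpair"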
kill -0 434265 00:19:43.117 00:04:13 -- common/autotest_common.sh@941 -- # uname 00:19:43.117 00:04:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:43.117 00:04:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 434265 00:19:43.117 00:04:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:43.117 00:04:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:43.117 00:04:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 434265' 00:19:43.117 killing process with pid 434265 00:19:43.117 00:04:13 -- common/autotest_common.sh@955 -- # kill 434265 00:19:43.117 00:04:13 -- common/autotest_common.sh@960 -- # wait 434265 00:19:43.117 00:04:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:43.117 00:04:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:43.117 00:04:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:43.117 00:04:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.117 00:04:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.117 00:04:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.117 00:04:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.117 00:04:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.664 00:04:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.664 00:04:15 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:19:45.664 00:04:15 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:47.049 00:04:17 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:48.960 00:04:19 -- target/perf_adq.sh@54 -- # sleep 5 00:19:54.341 00:04:24 -- target/perf_adq.sh@87 -- # nvmftestinit 00:19:54.341 00:04:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:54.341 00:04:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.341 00:04:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:54.341 00:04:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:54.341 00:04:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:54.341 00:04:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.341 00:04:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.341 00:04:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.341 00:04:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:54.341 00:04:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.341 00:04:24 -- common/autotest_common.sh@10 -- # set +x 00:19:54.341 00:04:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:54.341 00:04:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:54.341 00:04:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:54.341 00:04:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:54.341 00:04:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:54.341 00:04:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:54.341 00:04:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:54.341 00:04:24 -- nvmf/common.sh@295 -- # net_devs=() 00:19:54.341 00:04:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:54.341 00:04:24 -- nvmf/common.sh@296 -- # e810=() 00:19:54.341 00:04:24 -- nvmf/common.sh@296 -- # local -ga e810 00:19:54.341 00:04:24 -- nvmf/common.sh@297 -- # x722=() 00:19:54.341 00:04:24 -- nvmf/common.sh@297 -- # local -ga x722 00:19:54.341 00:04:24 -- nvmf/common.sh@298 -- # mlx=() 00:19:54.341 00:04:24 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:19:54.341 00:04:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.341 00:04:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:54.341 00:04:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:54.341 00:04:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:54.341 00:04:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.341 00:04:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:54.341 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:54.341 00:04:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.341 00:04:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:54.341 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:54.341 00:04:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:54.341 00:04:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.341 00:04:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.341 00:04:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:54.341 00:04:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.341 00:04:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:54.341 Found net devices under 0000:31:00.0: cvl_0_0 00:19:54.341 00:04:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.341 00:04:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.341 00:04:24 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.341 00:04:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:54.341 00:04:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.341 00:04:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:54.341 Found net devices under 0000:31:00.1: cvl_0_1 00:19:54.341 00:04:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.341 00:04:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:54.341 00:04:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:54.341 00:04:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:54.341 00:04:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.341 00:04:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.341 00:04:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.341 00:04:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:54.341 00:04:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.341 00:04:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.341 00:04:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:54.341 00:04:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.341 00:04:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.341 00:04:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:54.341 00:04:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:54.341 00:04:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.341 00:04:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.341 00:04:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.341 00:04:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.341 00:04:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:54.341 00:04:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.341 00:04:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.341 00:04:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.341 00:04:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:54.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:19:54.341 00:19:54.341 --- 10.0.0.2 ping statistics --- 00:19:54.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.341 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:19:54.341 00:04:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:54.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:19:54.341 00:19:54.341 --- 10.0.0.1 ping statistics --- 00:19:54.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.341 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:19:54.341 00:04:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.341 00:04:24 -- nvmf/common.sh@411 -- # return 0 00:19:54.341 00:04:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:54.341 00:04:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.341 00:04:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:54.341 00:04:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.341 00:04:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:54.341 00:04:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:54.342 00:04:24 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:19:54.342 00:04:24 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:54.342 00:04:24 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:54.342 00:04:24 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:54.342 net.core.busy_poll = 1 00:19:54.342 00:04:24 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:54.342 net.core.busy_read = 1 00:19:54.342 00:04:24 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:54.342 00:04:24 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:54.603 00:04:24 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:54.603 00:04:24 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:54.603 00:04:24 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:54.603 00:04:24 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:54.603 00:04:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:54.603 00:04:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:54.603 00:04:24 -- common/autotest_common.sh@10 -- # set +x 00:19:54.603 00:04:24 -- nvmf/common.sh@470 -- # nvmfpid=439243 00:19:54.603 00:04:24 -- nvmf/common.sh@471 -- # waitforlisten 439243 00:19:54.603 00:04:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:54.603 00:04:24 -- common/autotest_common.sh@817 -- # '[' -z 439243 ']' 00:19:54.603 00:04:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.603 00:04:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:54.603 00:04:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
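adq_configure_driver, traced above, is where the actual ADQ plumbing happens: hardware TC offload is enabled on the port, busy polling is turned on, an mqprio qdisc splits the queues into two traffic classes, and a flower filter steers NVMe/TCP traffic (dst port 4420) into the second class in hardware. The same commands collected in one place; the interface lives in the target namespace, hence the netns exec prefix, and the 2@0 2@2 queue split matches this 4-queue run:

ns="ip netns exec cvl_0_0_ns_spdk"
$ns ethtool --offload cvl_0_0 hw-tc-offload on
$ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
$ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$ns tc qdisc add dev cvl_0_0 ingress
$ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
$ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # tie TX queue choice to each flow's RX queue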
00:19:54.603 00:04:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:54.603 00:04:24 -- common/autotest_common.sh@10 -- # set +x 00:19:54.603 [2024-04-27 00:04:24.784460] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:19:54.603 [2024-04-27 00:04:24.784532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.603 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.864 [2024-04-27 00:04:24.855614] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.864 [2024-04-27 00:04:24.930114] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.864 [2024-04-27 00:04:24.930157] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.864 [2024-04-27 00:04:24.930164] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.864 [2024-04-27 00:04:24.930171] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.864 [2024-04-27 00:04:24.930177] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:54.864 [2024-04-27 00:04:24.930286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.864 [2024-04-27 00:04:24.930426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.864 [2024-04-27 00:04:24.930586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.864 [2024-04-27 00:04:24.930587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.438 00:04:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.438 00:04:25 -- common/autotest_common.sh@850 -- # return 0 00:19:55.438 00:04:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:55.438 00:04:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:55.438 00:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.438 00:04:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.438 00:04:25 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:19:55.438 00:04:25 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:55.438 00:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.438 00:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.438 00:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.438 00:04:25 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:55.438 00:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.438 00:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.698 00:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.698 00:04:25 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:55.698 00:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.698 00:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.698 [2024-04-27 00:04:25.683095] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.698 00:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.698 00:04:25 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
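The second pass started above differs from the first only in how sockets are grouped and marked: placement id 1 selects NAPI-ID-based socket grouping, so connections arriving on the same hardware queue share a poll group, while socket priority 1 maps the target's TCP sockets onto the ADQ traffic class created by the mqprio/flower setup, and the busy_poll/busy_read sysctls keep the reactors polling those queues. Only two RPC values change relative to the first pass:

# the two RPCs whose values differ in the busy-poll pass (1 instead of 0)
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1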
00:19:55.698 00:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.698 00:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.698 Malloc1 00:19:55.698 00:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.698 00:04:25 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.698 00:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.698 00:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.698 00:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.698 00:04:25 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.698 00:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.698 00:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.698 00:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.698 00:04:25 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.698 00:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.698 00:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.698 [2024-04-27 00:04:25.742420] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.698 00:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.698 00:04:25 -- target/perf_adq.sh@94 -- # perfpid=439438 00:19:55.698 00:04:25 -- target/perf_adq.sh@95 -- # sleep 2 00:19:55.698 00:04:25 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:55.699 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.611 00:04:27 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:19:57.611 00:04:27 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:57.611 00:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.611 00:04:27 -- target/perf_adq.sh@97 -- # wc -l 00:19:57.611 00:04:27 -- common/autotest_common.sh@10 -- # set +x 00:19:57.611 00:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.611 00:04:27 -- target/perf_adq.sh@97 -- # count=2 00:19:57.611 00:04:27 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:19:57.611 00:04:27 -- target/perf_adq.sh@103 -- # wait 439438 00:20:05.758 Initializing NVMe Controllers 00:20:05.758 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:05.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:05.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:05.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:05.758 Initialization complete. Launching workers. 
00:20:05.758 ======================================================== 00:20:05.758 Latency(us) 00:20:05.758 Device Information : IOPS MiB/s Average min max 00:20:05.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8558.20 33.43 7481.09 1399.33 52495.42 00:20:05.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11731.70 45.83 5456.10 1206.76 49658.83 00:20:05.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10245.30 40.02 6247.66 1225.59 50039.59 00:20:05.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6842.00 26.73 9353.64 1590.54 54012.41 00:20:05.758 ======================================================== 00:20:05.758 Total : 37377.20 146.00 6850.19 1206.76 54012.41 00:20:05.758 00:20:05.758 00:04:35 -- target/perf_adq.sh@104 -- # nvmftestfini 00:20:05.758 00:04:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:05.758 00:04:35 -- nvmf/common.sh@117 -- # sync 00:20:05.758 00:04:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.758 00:04:35 -- nvmf/common.sh@120 -- # set +e 00:20:05.758 00:04:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.758 00:04:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.758 rmmod nvme_tcp 00:20:05.758 rmmod nvme_fabrics 00:20:05.758 rmmod nvme_keyring 00:20:05.758 00:04:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.758 00:04:35 -- nvmf/common.sh@124 -- # set -e 00:20:05.758 00:04:35 -- nvmf/common.sh@125 -- # return 0 00:20:05.758 00:04:35 -- nvmf/common.sh@478 -- # '[' -n 439243 ']' 00:20:05.758 00:04:35 -- nvmf/common.sh@479 -- # killprocess 439243 00:20:05.758 00:04:35 -- common/autotest_common.sh@936 -- # '[' -z 439243 ']' 00:20:05.758 00:04:35 -- common/autotest_common.sh@940 -- # kill -0 439243 00:20:05.758 00:04:35 -- common/autotest_common.sh@941 -- # uname 00:20:05.758 00:04:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.758 00:04:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 439243 00:20:06.020 00:04:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:06.020 00:04:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:06.020 00:04:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 439243' 00:20:06.020 killing process with pid 439243 00:20:06.020 00:04:35 -- common/autotest_common.sh@955 -- # kill 439243 00:20:06.020 00:04:35 -- common/autotest_common.sh@960 -- # wait 439243 00:20:06.020 00:04:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:06.020 00:04:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:06.020 00:04:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:06.020 00:04:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.020 00:04:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.020 00:04:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.020 00:04:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.020 00:04:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.325 00:04:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.325 00:04:39 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:20:09.325 00:20:09.325 real 0m53.339s 00:20:09.325 user 2m48.960s 00:20:09.325 sys 0m10.304s 00:20:09.325 00:04:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:09.325 00:04:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.325 
************************************ 00:20:09.325 END TEST nvmf_perf_adq 00:20:09.325 ************************************ 00:20:09.325 00:04:39 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:09.325 00:04:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:09.325 00:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:09.325 00:04:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.325 ************************************ 00:20:09.325 START TEST nvmf_shutdown 00:20:09.325 ************************************ 00:20:09.325 00:04:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:09.325 * Looking for test storage... 00:20:09.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:09.325 00:04:39 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.325 00:04:39 -- nvmf/common.sh@7 -- # uname -s 00:20:09.325 00:04:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.325 00:04:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.325 00:04:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.325 00:04:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.325 00:04:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.325 00:04:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.325 00:04:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.325 00:04:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.325 00:04:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.325 00:04:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.325 00:04:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.325 00:04:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.325 00:04:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.325 00:04:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.325 00:04:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.325 00:04:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.325 00:04:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.325 00:04:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.325 00:04:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.325 00:04:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.325 00:04:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.325 00:04:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.325 00:04:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.587 00:04:39 -- paths/export.sh@5 -- # export PATH 00:20:09.587 00:04:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.587 00:04:39 -- nvmf/common.sh@47 -- # : 0 00:20:09.587 00:04:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.587 00:04:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.587 00:04:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.587 00:04:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.587 00:04:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.587 00:04:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.587 00:04:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.587 00:04:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.587 00:04:39 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:09.587 00:04:39 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:09.587 00:04:39 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:09.587 00:04:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:09.587 00:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:09.587 00:04:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.587 ************************************ 00:20:09.587 START TEST nvmf_shutdown_tc1 00:20:09.587 ************************************ 00:20:09.587 00:04:39 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:20:09.587 00:04:39 -- target/shutdown.sh@74 -- # starttarget 00:20:09.587 00:04:39 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:09.587 00:04:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:09.587 00:04:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.587 00:04:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:09.587 00:04:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:09.587 00:04:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:09.587 
00:04:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.587 00:04:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.587 00:04:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.587 00:04:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:09.587 00:04:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:09.587 00:04:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.587 00:04:39 -- common/autotest_common.sh@10 -- # set +x 00:20:17.739 00:04:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:17.739 00:04:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.739 00:04:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.739 00:04:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.739 00:04:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.739 00:04:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.739 00:04:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.739 00:04:46 -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.739 00:04:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.739 00:04:46 -- nvmf/common.sh@296 -- # e810=() 00:20:17.739 00:04:46 -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.739 00:04:46 -- nvmf/common.sh@297 -- # x722=() 00:20:17.739 00:04:46 -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.739 00:04:46 -- nvmf/common.sh@298 -- # mlx=() 00:20:17.739 00:04:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.739 00:04:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.739 00:04:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.739 00:04:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.739 00:04:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.739 00:04:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.739 00:04:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:17.739 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:17.739 00:04:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:17.739 00:04:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:17.739 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:17.739 00:04:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.739 00:04:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.739 00:04:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.739 00:04:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:17.739 00:04:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.739 00:04:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:17.739 Found net devices under 0000:31:00.0: cvl_0_0 00:20:17.739 00:04:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.739 00:04:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.739 00:04:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.739 00:04:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:17.739 00:04:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.739 00:04:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:17.739 Found net devices under 0000:31:00.1: cvl_0_1 00:20:17.739 00:04:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.739 00:04:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:17.739 00:04:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:17.739 00:04:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:17.739 00:04:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:17.739 00:04:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.739 00:04:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.739 00:04:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.739 00:04:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.739 00:04:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.739 00:04:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.739 00:04:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.739 00:04:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.739 00:04:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.739 00:04:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.739 00:04:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.740 00:04:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.740 00:04:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.740 00:04:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.740 00:04:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.740 00:04:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.740 00:04:46 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.740 00:04:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.740 00:04:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.740 00:04:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:20:17.740 00:20:17.740 --- 10.0.0.2 ping statistics --- 00:20:17.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.740 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:20:17.740 00:04:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:20:17.740 00:20:17.740 --- 10.0.0.1 ping statistics --- 00:20:17.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.740 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:20:17.740 00:04:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.740 00:04:46 -- nvmf/common.sh@411 -- # return 0 00:20:17.740 00:04:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:17.740 00:04:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.740 00:04:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:17.740 00:04:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:17.740 00:04:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.740 00:04:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:17.740 00:04:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:17.740 00:04:46 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:17.740 00:04:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:17.740 00:04:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:17.740 00:04:46 -- common/autotest_common.sh@10 -- # set +x 00:20:17.740 00:04:46 -- nvmf/common.sh@470 -- # nvmfpid=445993 00:20:17.740 00:04:46 -- nvmf/common.sh@471 -- # waitforlisten 445993 00:20:17.740 00:04:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:17.740 00:04:46 -- common/autotest_common.sh@817 -- # '[' -z 445993 ']' 00:20:17.740 00:04:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.740 00:04:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:17.740 00:04:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.740 00:04:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:17.740 00:04:46 -- common/autotest_common.sh@10 -- # set +x 00:20:17.740 [2024-04-27 00:04:46.873730] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
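
The block above is the entire physical-NIC topology for these runs: the discovery loop found the two E810 ports (0000:31:00.0 and 0000:31:00.1, device 0x159b, driver ice) exposed as cvl_0_0 and cvl_0_1, and nvmf_tcp_init then isolated the target port in its own network namespace. Condensed to the commands that matter, with interface names and addresses exactly as used here:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2                                                  # verify reachability in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why nvmf_tgt (pid 445993) is launched above through ip netns exec cvl_0_0_ns_spdk while the perf and bdevperf initiators run unwrapped in the root namespace.
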
00:20:17.740 [2024-04-27 00:04:46.873795] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.740 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.740 [2024-04-27 00:04:46.948052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.740 [2024-04-27 00:04:47.022589] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.740 [2024-04-27 00:04:47.022631] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.740 [2024-04-27 00:04:47.022638] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.740 [2024-04-27 00:04:47.022645] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.740 [2024-04-27 00:04:47.022651] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.740 [2024-04-27 00:04:47.022816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.740 [2024-04-27 00:04:47.022978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.740 [2024-04-27 00:04:47.023214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:17.740 [2024-04-27 00:04:47.023215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.740 00:04:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.740 00:04:47 -- common/autotest_common.sh@850 -- # return 0 00:20:17.740 00:04:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:17.740 00:04:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:17.740 00:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:17.740 00:04:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.740 00:04:47 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.740 00:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.740 00:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:17.740 [2024-04-27 00:04:47.700397] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.740 00:04:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.740 00:04:47 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:17.740 00:04:47 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:17.740 00:04:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:17.740 00:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:17.740 00:04:47 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 -- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 -- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 -- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 -- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 
-- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 -- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 -- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 -- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 -- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:17.740 00:04:47 -- target/shutdown.sh@28 -- # cat 00:20:17.740 00:04:47 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:17.740 00:04:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.740 00:04:47 -- common/autotest_common.sh@10 -- # set +x 00:20:17.740 Malloc1 00:20:17.740 [2024-04-27 00:04:47.803922] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.740 Malloc2 00:20:17.740 Malloc3 00:20:17.740 Malloc4 00:20:17.740 Malloc5 00:20:18.002 Malloc6 00:20:18.002 Malloc7 00:20:18.002 Malloc8 00:20:18.002 Malloc9 00:20:18.002 Malloc10 00:20:18.002 00:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.002 00:04:48 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:18.002 00:04:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:18.002 00:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.002 00:04:48 -- target/shutdown.sh@78 -- # perfpid=446359 00:20:18.002 00:04:48 -- target/shutdown.sh@79 -- # waitforlisten 446359 /var/tmp/bdevperf.sock 00:20:18.002 00:04:48 -- common/autotest_common.sh@817 -- # '[' -z 446359 ']' 00:20:18.002 00:04:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.002 00:04:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:18.002 00:04:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
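
The bdev_svc invocation below gets its controller list from gen_nvmf_target_json via --json on a process substitution. Each heredoc in the loop that follows contributes one attach-controller entry per cnode; for cnode1 the entry comes out as shown here (a re-indented excerpt of the config that is printed in compact form further down):

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

Ten such entries, Nvme1 through Nvme10 against cnode1 through cnode10, are joined and handed first to bdev_svc here and then to bdevperf in the tc1 run below.
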
00:20:18.002 00:04:48 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:18.002 00:04:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:18.002 00:04:48 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:18.002 00:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.002 00:04:48 -- nvmf/common.sh@521 -- # config=() 00:20:18.002 00:04:48 -- nvmf/common.sh@521 -- # local subsystem config 00:20:18.002 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.002 00:04:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.002 { 00:20:18.002 "params": { 00:20:18.002 "name": "Nvme$subsystem", 00:20:18.002 "trtype": "$TEST_TRANSPORT", 00:20:18.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.002 "adrfam": "ipv4", 00:20:18.002 "trsvcid": "$NVMF_PORT", 00:20:18.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.002 "hdgst": ${hdgst:-false}, 00:20:18.002 "ddgst": ${ddgst:-false} 00:20:18.002 }, 00:20:18.002 "method": "bdev_nvme_attach_controller" 00:20:18.002 } 00:20:18.002 EOF 00:20:18.002 )") 00:20:18.002 00:04:48 -- nvmf/common.sh@543 -- # cat 00:20:18.002 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.002 00:04:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.002 { 00:20:18.002 "params": { 00:20:18.002 "name": "Nvme$subsystem", 00:20:18.002 "trtype": "$TEST_TRANSPORT", 00:20:18.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.002 "adrfam": "ipv4", 00:20:18.002 "trsvcid": "$NVMF_PORT", 00:20:18.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.002 "hdgst": ${hdgst:-false}, 00:20:18.002 "ddgst": ${ddgst:-false} 00:20:18.002 }, 00:20:18.002 "method": "bdev_nvme_attach_controller" 00:20:18.002 } 00:20:18.002 EOF 00:20:18.002 )") 00:20:18.002 00:04:48 -- nvmf/common.sh@543 -- # cat 00:20:18.002 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.263 00:04:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.263 { 00:20:18.263 "params": { 00:20:18.263 "name": "Nvme$subsystem", 00:20:18.263 "trtype": "$TEST_TRANSPORT", 00:20:18.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.263 "adrfam": "ipv4", 00:20:18.263 "trsvcid": "$NVMF_PORT", 00:20:18.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.263 "hdgst": ${hdgst:-false}, 00:20:18.263 "ddgst": ${ddgst:-false} 00:20:18.263 }, 00:20:18.263 "method": "bdev_nvme_attach_controller" 00:20:18.263 } 00:20:18.263 EOF 00:20:18.263 )") 00:20:18.263 00:04:48 -- nvmf/common.sh@543 -- # cat 00:20:18.263 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.263 00:04:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.263 { 00:20:18.263 "params": { 00:20:18.263 "name": "Nvme$subsystem", 00:20:18.263 "trtype": "$TEST_TRANSPORT", 00:20:18.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.263 "adrfam": "ipv4", 00:20:18.263 "trsvcid": "$NVMF_PORT", 00:20:18.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.263 "hdgst": ${hdgst:-false}, 00:20:18.263 "ddgst": ${ddgst:-false} 00:20:18.263 }, 00:20:18.263 "method": "bdev_nvme_attach_controller" 00:20:18.263 } 00:20:18.263 EOF 00:20:18.263 )") 00:20:18.263 00:04:48 -- 
nvmf/common.sh@543 -- # cat 00:20:18.263 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.263 00:04:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.264 { 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme$subsystem", 00:20:18.264 "trtype": "$TEST_TRANSPORT", 00:20:18.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "$NVMF_PORT", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.264 "hdgst": ${hdgst:-false}, 00:20:18.264 "ddgst": ${ddgst:-false} 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 } 00:20:18.264 EOF 00:20:18.264 )") 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # cat 00:20:18.264 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.264 { 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme$subsystem", 00:20:18.264 "trtype": "$TEST_TRANSPORT", 00:20:18.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "$NVMF_PORT", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.264 "hdgst": ${hdgst:-false}, 00:20:18.264 "ddgst": ${ddgst:-false} 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 } 00:20:18.264 EOF 00:20:18.264 )") 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # cat 00:20:18.264 [2024-04-27 00:04:48.248024] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:20:18.264 [2024-04-27 00:04:48.248075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:18.264 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.264 { 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme$subsystem", 00:20:18.264 "trtype": "$TEST_TRANSPORT", 00:20:18.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "$NVMF_PORT", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.264 "hdgst": ${hdgst:-false}, 00:20:18.264 "ddgst": ${ddgst:-false} 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 } 00:20:18.264 EOF 00:20:18.264 )") 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # cat 00:20:18.264 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.264 { 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme$subsystem", 00:20:18.264 "trtype": "$TEST_TRANSPORT", 00:20:18.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "$NVMF_PORT", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.264 "hdgst": ${hdgst:-false}, 00:20:18.264 "ddgst": ${ddgst:-false} 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 } 00:20:18.264 EOF 00:20:18.264 )") 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # cat 00:20:18.264 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.264 00:04:48 -- nvmf/common.sh@543 
-- # config+=("$(cat <<-EOF 00:20:18.264 { 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme$subsystem", 00:20:18.264 "trtype": "$TEST_TRANSPORT", 00:20:18.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "$NVMF_PORT", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.264 "hdgst": ${hdgst:-false}, 00:20:18.264 "ddgst": ${ddgst:-false} 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 } 00:20:18.264 EOF 00:20:18.264 )") 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # cat 00:20:18.264 00:04:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.264 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.264 { 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme$subsystem", 00:20:18.264 "trtype": "$TEST_TRANSPORT", 00:20:18.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "$NVMF_PORT", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.264 "hdgst": ${hdgst:-false}, 00:20:18.264 "ddgst": ${ddgst:-false} 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 } 00:20:18.264 EOF 00:20:18.264 )") 00:20:18.264 00:04:48 -- nvmf/common.sh@543 -- # cat 00:20:18.264 00:04:48 -- nvmf/common.sh@545 -- # jq . 00:20:18.264 00:04:48 -- nvmf/common.sh@546 -- # IFS=, 00:20:18.264 00:04:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme1", 00:20:18.264 "trtype": "tcp", 00:20:18.264 "traddr": "10.0.0.2", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "4420", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.264 "hdgst": false, 00:20:18.264 "ddgst": false 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 },{ 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme2", 00:20:18.264 "trtype": "tcp", 00:20:18.264 "traddr": "10.0.0.2", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "4420", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:18.264 "hdgst": false, 00:20:18.264 "ddgst": false 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 },{ 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme3", 00:20:18.264 "trtype": "tcp", 00:20:18.264 "traddr": "10.0.0.2", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "4420", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:18.264 "hdgst": false, 00:20:18.264 "ddgst": false 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 },{ 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme4", 00:20:18.264 "trtype": "tcp", 00:20:18.264 "traddr": "10.0.0.2", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "4420", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:18.264 "hdgst": false, 00:20:18.264 "ddgst": false 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 },{ 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme5", 00:20:18.264 "trtype": "tcp", 00:20:18.264 "traddr": "10.0.0.2", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "4420", 
00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:18.264 "hdgst": false, 00:20:18.264 "ddgst": false 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 },{ 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme6", 00:20:18.264 "trtype": "tcp", 00:20:18.264 "traddr": "10.0.0.2", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "4420", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:18.264 "hdgst": false, 00:20:18.264 "ddgst": false 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 },{ 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme7", 00:20:18.264 "trtype": "tcp", 00:20:18.264 "traddr": "10.0.0.2", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "4420", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:18.264 "hdgst": false, 00:20:18.264 "ddgst": false 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 },{ 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme8", 00:20:18.264 "trtype": "tcp", 00:20:18.264 "traddr": "10.0.0.2", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "4420", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:18.264 "hdgst": false, 00:20:18.264 "ddgst": false 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 },{ 00:20:18.264 "params": { 00:20:18.264 "name": "Nvme9", 00:20:18.264 "trtype": "tcp", 00:20:18.264 "traddr": "10.0.0.2", 00:20:18.264 "adrfam": "ipv4", 00:20:18.264 "trsvcid": "4420", 00:20:18.264 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:18.264 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:18.264 "hdgst": false, 00:20:18.264 "ddgst": false 00:20:18.264 }, 00:20:18.264 "method": "bdev_nvme_attach_controller" 00:20:18.264 },{ 00:20:18.265 "params": { 00:20:18.265 "name": "Nvme10", 00:20:18.265 "trtype": "tcp", 00:20:18.265 "traddr": "10.0.0.2", 00:20:18.265 "adrfam": "ipv4", 00:20:18.265 "trsvcid": "4420", 00:20:18.265 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:18.265 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:18.265 "hdgst": false, 00:20:18.265 "ddgst": false 00:20:18.265 }, 00:20:18.265 "method": "bdev_nvme_attach_controller" 00:20:18.265 }' 00:20:18.265 [2024-04-27 00:04:48.309275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.265 [2024-04-27 00:04:48.374133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.651 00:04:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:19.651 00:04:49 -- common/autotest_common.sh@850 -- # return 0 00:20:19.651 00:04:49 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:19.651 00:04:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.651 00:04:49 -- common/autotest_common.sh@10 -- # set +x 00:20:19.651 00:04:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.651 00:04:49 -- target/shutdown.sh@83 -- # kill -9 446359 00:20:19.651 00:04:49 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:19.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 446359 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:19.652 00:04:49 -- target/shutdown.sh@87 -- # sleep 1 00:20:20.595 00:04:50 
-- target/shutdown.sh@88 -- # kill -0 445993 00:20:20.595 00:04:50 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:20.595 00:04:50 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:20.595 00:04:50 -- nvmf/common.sh@521 -- # config=() 00:20:20.595 00:04:50 -- nvmf/common.sh@521 -- # local subsystem config 00:20:20.595 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.595 { 00:20:20.595 "params": { 00:20:20.595 "name": "Nvme$subsystem", 00:20:20.595 "trtype": "$TEST_TRANSPORT", 00:20:20.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.595 "adrfam": "ipv4", 00:20:20.595 "trsvcid": "$NVMF_PORT", 00:20:20.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.595 "hdgst": ${hdgst:-false}, 00:20:20.595 "ddgst": ${ddgst:-false} 00:20:20.595 }, 00:20:20.595 "method": "bdev_nvme_attach_controller" 00:20:20.595 } 00:20:20.595 EOF 00:20:20.595 )") 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.595 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.595 { 00:20:20.595 "params": { 00:20:20.595 "name": "Nvme$subsystem", 00:20:20.595 "trtype": "$TEST_TRANSPORT", 00:20:20.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.595 "adrfam": "ipv4", 00:20:20.595 "trsvcid": "$NVMF_PORT", 00:20:20.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.595 "hdgst": ${hdgst:-false}, 00:20:20.595 "ddgst": ${ddgst:-false} 00:20:20.595 }, 00:20:20.595 "method": "bdev_nvme_attach_controller" 00:20:20.595 } 00:20:20.595 EOF 00:20:20.595 )") 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.595 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.595 { 00:20:20.595 "params": { 00:20:20.595 "name": "Nvme$subsystem", 00:20:20.595 "trtype": "$TEST_TRANSPORT", 00:20:20.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.595 "adrfam": "ipv4", 00:20:20.595 "trsvcid": "$NVMF_PORT", 00:20:20.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.595 "hdgst": ${hdgst:-false}, 00:20:20.595 "ddgst": ${ddgst:-false} 00:20:20.595 }, 00:20:20.595 "method": "bdev_nvme_attach_controller" 00:20:20.595 } 00:20:20.595 EOF 00:20:20.595 )") 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.595 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.595 { 00:20:20.595 "params": { 00:20:20.595 "name": "Nvme$subsystem", 00:20:20.595 "trtype": "$TEST_TRANSPORT", 00:20:20.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.595 "adrfam": "ipv4", 00:20:20.595 "trsvcid": "$NVMF_PORT", 00:20:20.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.595 "hdgst": ${hdgst:-false}, 00:20:20.595 "ddgst": ${ddgst:-false} 00:20:20.595 }, 00:20:20.595 "method": "bdev_nvme_attach_controller" 00:20:20.595 } 00:20:20.595 EOF 00:20:20.595 )") 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.595 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.595 { 00:20:20.595 "params": { 00:20:20.595 "name": "Nvme$subsystem", 00:20:20.595 "trtype": "$TEST_TRANSPORT", 00:20:20.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.595 "adrfam": "ipv4", 00:20:20.595 "trsvcid": "$NVMF_PORT", 00:20:20.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.595 "hdgst": ${hdgst:-false}, 00:20:20.595 "ddgst": ${ddgst:-false} 00:20:20.595 }, 00:20:20.595 "method": "bdev_nvme_attach_controller" 00:20:20.595 } 00:20:20.595 EOF 00:20:20.595 )") 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.595 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.595 { 00:20:20.595 "params": { 00:20:20.595 "name": "Nvme$subsystem", 00:20:20.595 "trtype": "$TEST_TRANSPORT", 00:20:20.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.595 "adrfam": "ipv4", 00:20:20.595 "trsvcid": "$NVMF_PORT", 00:20:20.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.595 "hdgst": ${hdgst:-false}, 00:20:20.595 "ddgst": ${ddgst:-false} 00:20:20.595 }, 00:20:20.595 "method": "bdev_nvme_attach_controller" 00:20:20.595 } 00:20:20.595 EOF 00:20:20.595 )") 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.595 [2024-04-27 00:04:50.652373] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:20:20.595 [2024-04-27 00:04:50.652427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446895 ] 00:20:20.595 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.595 { 00:20:20.595 "params": { 00:20:20.595 "name": "Nvme$subsystem", 00:20:20.595 "trtype": "$TEST_TRANSPORT", 00:20:20.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.595 "adrfam": "ipv4", 00:20:20.595 "trsvcid": "$NVMF_PORT", 00:20:20.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.595 "hdgst": ${hdgst:-false}, 00:20:20.595 "ddgst": ${ddgst:-false} 00:20:20.595 }, 00:20:20.595 "method": "bdev_nvme_attach_controller" 00:20:20.595 } 00:20:20.595 EOF 00:20:20.595 )") 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.595 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.595 { 00:20:20.595 "params": { 00:20:20.595 "name": "Nvme$subsystem", 00:20:20.595 "trtype": "$TEST_TRANSPORT", 00:20:20.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.595 "adrfam": "ipv4", 00:20:20.595 "trsvcid": "$NVMF_PORT", 00:20:20.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.595 "hdgst": ${hdgst:-false}, 00:20:20.595 "ddgst": ${ddgst:-false} 00:20:20.595 }, 00:20:20.595 "method": "bdev_nvme_attach_controller" 00:20:20.595 } 00:20:20.595 EOF 00:20:20.595 )") 00:20:20.595 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.596 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.596 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.596 { 00:20:20.596 "params": { 00:20:20.596 
"name": "Nvme$subsystem", 00:20:20.596 "trtype": "$TEST_TRANSPORT", 00:20:20.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "$NVMF_PORT", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.596 "hdgst": ${hdgst:-false}, 00:20:20.596 "ddgst": ${ddgst:-false} 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 } 00:20:20.596 EOF 00:20:20.596 )") 00:20:20.596 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.596 00:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.596 00:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.596 { 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme$subsystem", 00:20:20.596 "trtype": "$TEST_TRANSPORT", 00:20:20.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "$NVMF_PORT", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.596 "hdgst": ${hdgst:-false}, 00:20:20.596 "ddgst": ${ddgst:-false} 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 } 00:20:20.596 EOF 00:20:20.596 )") 00:20:20.596 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.596 00:04:50 -- nvmf/common.sh@543 -- # cat 00:20:20.596 00:04:50 -- nvmf/common.sh@545 -- # jq . 00:20:20.596 00:04:50 -- nvmf/common.sh@546 -- # IFS=, 00:20:20.596 00:04:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme1", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 },{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme2", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 },{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme3", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 },{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme4", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 },{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme5", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:20.596 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 },{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme6", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 },{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme7", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 },{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme8", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 },{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme9", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 },{ 00:20:20.596 "params": { 00:20:20.596 "name": "Nvme10", 00:20:20.596 "trtype": "tcp", 00:20:20.596 "traddr": "10.0.0.2", 00:20:20.596 "adrfam": "ipv4", 00:20:20.596 "trsvcid": "4420", 00:20:20.596 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:20.596 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:20.596 "hdgst": false, 00:20:20.596 "ddgst": false 00:20:20.596 }, 00:20:20.596 "method": "bdev_nvme_attach_controller" 00:20:20.596 }' 00:20:20.596 [2024-04-27 00:04:50.713922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.596 [2024-04-27 00:04:50.778148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.512 Running I/O for 1 seconds... 
00:20:23.456 00:20:23.456 Latency(us) 00:20:23.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.456 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme1n1 : 1.03 185.51 11.59 0.00 0.00 341444.84 20971.52 281367.89 00:20:23.456 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme2n1 : 1.05 247.53 15.47 0.00 0.00 244883.26 19005.44 248162.99 00:20:23.456 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme3n1 : 1.14 224.21 14.01 0.00 0.00 273018.67 18459.31 253405.87 00:20:23.456 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme4n1 : 1.15 223.03 13.94 0.00 0.00 269652.27 18896.21 248162.99 00:20:23.456 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme5n1 : 1.15 222.68 13.92 0.00 0.00 265422.40 13871.79 251658.24 00:20:23.456 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme6n1 : 1.18 270.10 16.88 0.00 0.00 215595.18 18459.31 248162.99 00:20:23.456 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme7n1 : 1.18 272.05 17.00 0.00 0.00 206713.17 11687.25 249910.61 00:20:23.456 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme8n1 : 1.19 271.98 17.00 0.00 0.00 205829.56 3031.04 246415.36 00:20:23.456 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme9n1 : 1.19 272.27 17.02 0.00 0.00 202201.40 4041.39 251658.24 00:20:23.456 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:23.456 Verification LBA range: start 0x0 length 0x400 00:20:23.456 Nvme10n1 : 1.20 266.36 16.65 0.00 0.00 203749.89 7864.32 256901.12 00:20:23.456 =================================================================================================================== 00:20:23.456 Total : 2455.73 153.48 0.00 0.00 236438.49 3031.04 281367.89 00:20:23.456 00:04:53 -- target/shutdown.sh@94 -- # stoptarget 00:20:23.456 00:04:53 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:23.456 00:04:53 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:23.456 00:04:53 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:23.457 00:04:53 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:23.457 00:04:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:23.457 00:04:53 -- nvmf/common.sh@117 -- # sync 00:20:23.457 00:04:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.457 00:04:53 -- nvmf/common.sh@120 -- # set +e 00:20:23.457 00:04:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.457 00:04:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.457 rmmod nvme_tcp 00:20:23.457 rmmod nvme_fabrics 00:20:23.457 rmmod nvme_keyring 
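
The tc1 numbers are easy to sanity-check: bdevperf ran with -o 65536, so MiB/s should equal IOPS / 16 on every row. A quick check of the Total row under that assumption:

  # 65536-byte I/Os: MiB/s = IOPS * 65536 / 2^20 = IOPS / 16
  awk 'BEGIN { printf "%.2f MiB/s\n", 2455.73 / 16 }'   # prints 153.48 MiB/s, matching the Total row

The per-device rows follow the same rule (185.51 / 16 = 11.59 for Nvme1n1), so the table is self-consistent even though the ten controllers land at quite different throughputs and latencies.
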
00:20:23.719 00:04:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.719 00:04:53 -- nvmf/common.sh@124 -- # set -e 00:20:23.719 00:04:53 -- nvmf/common.sh@125 -- # return 0 00:20:23.719 00:04:53 -- nvmf/common.sh@478 -- # '[' -n 445993 ']' 00:20:23.719 00:04:53 -- nvmf/common.sh@479 -- # killprocess 445993 00:20:23.719 00:04:53 -- common/autotest_common.sh@936 -- # '[' -z 445993 ']' 00:20:23.719 00:04:53 -- common/autotest_common.sh@940 -- # kill -0 445993 00:20:23.719 00:04:53 -- common/autotest_common.sh@941 -- # uname 00:20:23.719 00:04:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:23.719 00:04:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 445993 00:20:23.719 00:04:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:23.719 00:04:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:23.719 00:04:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 445993' 00:20:23.719 killing process with pid 445993 00:20:23.719 00:04:53 -- common/autotest_common.sh@955 -- # kill 445993 00:20:23.719 00:04:53 -- common/autotest_common.sh@960 -- # wait 445993 00:20:23.980 00:04:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:23.980 00:04:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:23.980 00:04:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:23.980 00:04:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.980 00:04:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:23.980 00:04:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.980 00:04:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.980 00:04:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.896 00:04:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:25.896 00:20:25.896 real 0m16.363s 00:20:25.896 user 0m33.847s 00:20:25.896 sys 0m6.338s 00:20:25.896 00:04:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:25.896 00:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.896 ************************************ 00:20:25.896 END TEST nvmf_shutdown_tc1 00:20:25.896 ************************************ 00:20:25.896 00:04:56 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:25.896 00:04:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:25.896 00:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:25.896 00:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:26.156 ************************************ 00:20:26.156 START TEST nvmf_shutdown_tc2 00:20:26.156 ************************************ 00:20:26.156 00:04:56 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:20:26.156 00:04:56 -- target/shutdown.sh@99 -- # starttarget 00:20:26.156 00:04:56 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:26.156 00:04:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:26.156 00:04:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.156 00:04:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:26.156 00:04:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:26.156 00:04:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:26.156 00:04:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.156 00:04:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.156 00:04:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.156 00:04:56 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:26.156 00:04:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.156 00:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:26.156 00:04:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:26.156 00:04:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.156 00:04:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.156 00:04:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.156 00:04:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.156 00:04:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.156 00:04:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.156 00:04:56 -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.156 00:04:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.156 00:04:56 -- nvmf/common.sh@296 -- # e810=() 00:20:26.156 00:04:56 -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.156 00:04:56 -- nvmf/common.sh@297 -- # x722=() 00:20:26.156 00:04:56 -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.156 00:04:56 -- nvmf/common.sh@298 -- # mlx=() 00:20:26.156 00:04:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.156 00:04:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.156 00:04:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.156 00:04:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.156 00:04:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.156 00:04:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.156 00:04:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:26.156 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:26.156 00:04:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.156 00:04:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:26.156 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:26.156 00:04:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.156 00:04:56 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.156 00:04:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.156 00:04:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.156 00:04:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:26.156 00:04:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.156 00:04:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:26.156 Found net devices under 0000:31:00.0: cvl_0_0 00:20:26.156 00:04:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.156 00:04:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.156 00:04:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.156 00:04:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:26.156 00:04:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.156 00:04:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:26.156 Found net devices under 0000:31:00.1: cvl_0_1 00:20:26.156 00:04:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.156 00:04:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:26.156 00:04:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:26.156 00:04:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:26.156 00:04:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:26.156 00:04:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.156 00:04:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.156 00:04:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.156 00:04:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.156 00:04:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.156 00:04:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.156 00:04:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.156 00:04:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.157 00:04:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.157 00:04:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.157 00:04:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.157 00:04:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.157 00:04:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.157 00:04:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.417 00:04:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.417 00:04:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.417 00:04:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.417 00:04:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.417 00:04:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:20:26.417 00:04:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:20:26.417 00:20:26.417 --- 10.0.0.2 ping statistics --- 00:20:26.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.417 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:20:26.417 00:04:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:20:26.417 00:20:26.417 --- 10.0.0.1 ping statistics --- 00:20:26.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.417 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:20:26.417 00:04:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.417 00:04:56 -- nvmf/common.sh@411 -- # return 0 00:20:26.417 00:04:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:26.417 00:04:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.417 00:04:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:26.417 00:04:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:26.417 00:04:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.418 00:04:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:26.418 00:04:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:26.418 00:04:56 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:26.418 00:04:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:26.418 00:04:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:26.418 00:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:26.418 00:04:56 -- nvmf/common.sh@470 -- # nvmfpid=448167 00:20:26.418 00:04:56 -- nvmf/common.sh@471 -- # waitforlisten 448167 00:20:26.418 00:04:56 -- common/autotest_common.sh@817 -- # '[' -z 448167 ']' 00:20:26.418 00:04:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.418 00:04:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.418 00:04:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.418 00:04:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.418 00:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:26.418 00:04:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:26.418 [2024-04-27 00:04:56.619970] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:20:26.418 [2024-04-27 00:04:56.620026] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.677 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.677 [2024-04-27 00:04:56.688665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.677 [2024-04-27 00:04:56.760002] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:26.677 [2024-04-27 00:04:56.760038] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.677 [2024-04-27 00:04:56.760046] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.677 [2024-04-27 00:04:56.760052] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.677 [2024-04-27 00:04:56.760058] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.677 [2024-04-27 00:04:56.760164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.677 [2024-04-27 00:04:56.760320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.677 [2024-04-27 00:04:56.760475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.677 [2024-04-27 00:04:56.760477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:27.248 00:04:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:27.248 00:04:57 -- common/autotest_common.sh@850 -- # return 0 00:20:27.248 00:04:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:27.248 00:04:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:27.248 00:04:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.248 00:04:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.248 00:04:57 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.248 00:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.248 00:04:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.248 [2024-04-27 00:04:57.431285] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.248 00:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.248 00:04:57 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:27.248 00:04:57 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:27.248 00:04:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:27.248 00:04:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.248 00:04:57 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:27.248 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.248 00:04:57 -- target/shutdown.sh@28 -- # cat 00:20:27.248 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.248 00:04:57 -- target/shutdown.sh@28 -- # cat 00:20:27.248 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.248 00:04:57 -- target/shutdown.sh@28 -- # cat 00:20:27.248 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.248 00:04:57 -- target/shutdown.sh@28 -- # cat 00:20:27.248 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.248 00:04:57 -- target/shutdown.sh@28 -- # cat 00:20:27.510 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.510 00:04:57 -- target/shutdown.sh@28 -- # cat 00:20:27.510 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.510 00:04:57 -- target/shutdown.sh@28 -- # cat 00:20:27.510 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.510 00:04:57 -- target/shutdown.sh@28 -- # cat 00:20:27.510 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.510 00:04:57 -- 
target/shutdown.sh@28 -- # cat 00:20:27.510 00:04:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:27.510 00:04:57 -- target/shutdown.sh@28 -- # cat 00:20:27.510 00:04:57 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:27.510 00:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.510 00:04:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.510 Malloc1 00:20:27.510 [2024-04-27 00:04:57.531642] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.510 Malloc2 00:20:27.510 Malloc3 00:20:27.510 Malloc4 00:20:27.510 Malloc5 00:20:27.510 Malloc6 00:20:27.771 Malloc7 00:20:27.771 Malloc8 00:20:27.771 Malloc9 00:20:27.771 Malloc10 00:20:27.771 00:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.771 00:04:57 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:27.771 00:04:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:27.771 00:04:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.771 00:04:57 -- target/shutdown.sh@103 -- # perfpid=448552 00:20:27.771 00:04:57 -- target/shutdown.sh@104 -- # waitforlisten 448552 /var/tmp/bdevperf.sock 00:20:27.771 00:04:57 -- common/autotest_common.sh@817 -- # '[' -z 448552 ']' 00:20:27.771 00:04:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.771 00:04:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:27.771 00:04:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.771 00:04:57 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:27.771 00:04:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:27.771 00:04:57 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:27.771 00:04:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.771 00:04:57 -- nvmf/common.sh@521 -- # config=() 00:20:27.771 00:04:57 -- nvmf/common.sh@521 -- # local subsystem config 00:20:27.771 00:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.771 00:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.771 { 00:20:27.771 "params": { 00:20:27.771 "name": "Nvme$subsystem", 00:20:27.771 "trtype": "$TEST_TRANSPORT", 00:20:27.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.771 "adrfam": "ipv4", 00:20:27.771 "trsvcid": "$NVMF_PORT", 00:20:27.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.771 "hdgst": ${hdgst:-false}, 00:20:27.771 "ddgst": ${ddgst:-false} 00:20:27.771 }, 00:20:27.771 "method": "bdev_nvme_attach_controller" 00:20:27.771 } 00:20:27.771 EOF 00:20:27.771 )") 00:20:27.771 00:04:57 -- nvmf/common.sh@543 -- # cat 00:20:27.771 00:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.771 00:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.771 { 00:20:27.771 "params": { 00:20:27.771 "name": "Nvme$subsystem", 00:20:27.771 "trtype": "$TEST_TRANSPORT", 00:20:27.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.771 "adrfam": "ipv4", 00:20:27.771 "trsvcid": "$NVMF_PORT", 00:20:27.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.771 
"hdgst": ${hdgst:-false}, 00:20:27.771 "ddgst": ${ddgst:-false} 00:20:27.771 }, 00:20:27.771 "method": "bdev_nvme_attach_controller" 00:20:27.771 } 00:20:27.771 EOF 00:20:27.771 )") 00:20:27.771 00:04:57 -- nvmf/common.sh@543 -- # cat 00:20:27.771 00:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.771 00:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.771 { 00:20:27.771 "params": { 00:20:27.771 "name": "Nvme$subsystem", 00:20:27.771 "trtype": "$TEST_TRANSPORT", 00:20:27.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.771 "adrfam": "ipv4", 00:20:27.771 "trsvcid": "$NVMF_PORT", 00:20:27.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.771 "hdgst": ${hdgst:-false}, 00:20:27.771 "ddgst": ${ddgst:-false} 00:20:27.771 }, 00:20:27.771 "method": "bdev_nvme_attach_controller" 00:20:27.771 } 00:20:27.771 EOF 00:20:27.771 )") 00:20:27.771 00:04:57 -- nvmf/common.sh@543 -- # cat 00:20:27.771 00:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.771 00:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.771 { 00:20:27.771 "params": { 00:20:27.771 "name": "Nvme$subsystem", 00:20:27.771 "trtype": "$TEST_TRANSPORT", 00:20:27.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.772 "adrfam": "ipv4", 00:20:27.772 "trsvcid": "$NVMF_PORT", 00:20:27.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.772 "hdgst": ${hdgst:-false}, 00:20:27.772 "ddgst": ${ddgst:-false} 00:20:27.772 }, 00:20:27.772 "method": "bdev_nvme_attach_controller" 00:20:27.772 } 00:20:27.772 EOF 00:20:27.772 )") 00:20:27.772 00:04:57 -- nvmf/common.sh@543 -- # cat 00:20:27.772 00:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.772 00:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.772 { 00:20:27.772 "params": { 00:20:27.772 "name": "Nvme$subsystem", 00:20:27.772 "trtype": "$TEST_TRANSPORT", 00:20:27.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.772 "adrfam": "ipv4", 00:20:27.772 "trsvcid": "$NVMF_PORT", 00:20:27.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.772 "hdgst": ${hdgst:-false}, 00:20:27.772 "ddgst": ${ddgst:-false} 00:20:27.772 }, 00:20:27.772 "method": "bdev_nvme_attach_controller" 00:20:27.772 } 00:20:27.772 EOF 00:20:27.772 )") 00:20:27.772 00:04:57 -- nvmf/common.sh@543 -- # cat 00:20:27.772 00:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:27.772 00:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:27.772 { 00:20:27.772 "params": { 00:20:27.772 "name": "Nvme$subsystem", 00:20:27.772 "trtype": "$TEST_TRANSPORT", 00:20:27.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.772 "adrfam": "ipv4", 00:20:27.772 "trsvcid": "$NVMF_PORT", 00:20:27.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.772 "hdgst": ${hdgst:-false}, 00:20:27.772 "ddgst": ${ddgst:-false} 00:20:27.772 }, 00:20:27.772 "method": "bdev_nvme_attach_controller" 00:20:27.772 } 00:20:27.772 EOF 00:20:27.772 )") 00:20:27.772 00:04:57 -- nvmf/common.sh@543 -- # cat 00:20:27.772 [2024-04-27 00:04:57.987983] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:20:27.772 [2024-04-27 00:04:57.988035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448552 ] 00:20:28.033 00:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.033 00:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.033 { 00:20:28.033 "params": { 00:20:28.033 "name": "Nvme$subsystem", 00:20:28.033 "trtype": "$TEST_TRANSPORT", 00:20:28.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.033 "adrfam": "ipv4", 00:20:28.033 "trsvcid": "$NVMF_PORT", 00:20:28.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.033 "hdgst": ${hdgst:-false}, 00:20:28.033 "ddgst": ${ddgst:-false} 00:20:28.033 }, 00:20:28.033 "method": "bdev_nvme_attach_controller" 00:20:28.033 } 00:20:28.033 EOF 00:20:28.033 )") 00:20:28.033 00:04:57 -- nvmf/common.sh@543 -- # cat 00:20:28.033 00:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.033 00:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.033 { 00:20:28.033 "params": { 00:20:28.033 "name": "Nvme$subsystem", 00:20:28.033 "trtype": "$TEST_TRANSPORT", 00:20:28.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.033 "adrfam": "ipv4", 00:20:28.033 "trsvcid": "$NVMF_PORT", 00:20:28.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.033 "hdgst": ${hdgst:-false}, 00:20:28.033 "ddgst": ${ddgst:-false} 00:20:28.033 }, 00:20:28.033 "method": "bdev_nvme_attach_controller" 00:20:28.033 } 00:20:28.033 EOF 00:20:28.033 )") 00:20:28.033 00:04:57 -- nvmf/common.sh@543 -- # cat 00:20:28.033 00:04:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.033 00:04:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.033 { 00:20:28.033 "params": { 00:20:28.033 "name": "Nvme$subsystem", 00:20:28.033 "trtype": "$TEST_TRANSPORT", 00:20:28.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.033 "adrfam": "ipv4", 00:20:28.033 "trsvcid": "$NVMF_PORT", 00:20:28.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.033 "hdgst": ${hdgst:-false}, 00:20:28.033 "ddgst": ${ddgst:-false} 00:20:28.033 }, 00:20:28.033 "method": "bdev_nvme_attach_controller" 00:20:28.033 } 00:20:28.033 EOF 00:20:28.033 )") 00:20:28.033 00:04:58 -- nvmf/common.sh@543 -- # cat 00:20:28.033 00:04:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.033 00:04:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.033 { 00:20:28.033 "params": { 00:20:28.033 "name": "Nvme$subsystem", 00:20:28.033 "trtype": "$TEST_TRANSPORT", 00:20:28.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.033 "adrfam": "ipv4", 00:20:28.033 "trsvcid": "$NVMF_PORT", 00:20:28.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.033 "hdgst": ${hdgst:-false}, 00:20:28.033 "ddgst": ${ddgst:-false} 00:20:28.033 }, 00:20:28.033 "method": "bdev_nvme_attach_controller" 00:20:28.033 } 00:20:28.033 EOF 00:20:28.033 )") 00:20:28.033 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.033 00:04:58 -- nvmf/common.sh@543 -- # cat 00:20:28.033 00:04:58 -- nvmf/common.sh@545 -- # jq . 
00:20:28.033 00:04:58 -- nvmf/common.sh@546 -- # IFS=, 00:20:28.033 00:04:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:28.033 "params": { 00:20:28.033 "name": "Nvme1", 00:20:28.033 "trtype": "tcp", 00:20:28.033 "traddr": "10.0.0.2", 00:20:28.033 "adrfam": "ipv4", 00:20:28.033 "trsvcid": "4420", 00:20:28.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.033 "hdgst": false, 00:20:28.033 "ddgst": false 00:20:28.033 }, 00:20:28.033 "method": "bdev_nvme_attach_controller" 00:20:28.033 },{ 00:20:28.033 "params": { 00:20:28.033 "name": "Nvme2", 00:20:28.033 "trtype": "tcp", 00:20:28.033 "traddr": "10.0.0.2", 00:20:28.033 "adrfam": "ipv4", 00:20:28.033 "trsvcid": "4420", 00:20:28.033 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:28.033 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:28.033 "hdgst": false, 00:20:28.033 "ddgst": false 00:20:28.033 }, 00:20:28.033 "method": "bdev_nvme_attach_controller" 00:20:28.033 },{ 00:20:28.033 "params": { 00:20:28.033 "name": "Nvme3", 00:20:28.033 "trtype": "tcp", 00:20:28.033 "traddr": "10.0.0.2", 00:20:28.033 "adrfam": "ipv4", 00:20:28.033 "trsvcid": "4420", 00:20:28.033 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:28.033 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:28.033 "hdgst": false, 00:20:28.033 "ddgst": false 00:20:28.033 }, 00:20:28.033 "method": "bdev_nvme_attach_controller" 00:20:28.033 },{ 00:20:28.033 "params": { 00:20:28.033 "name": "Nvme4", 00:20:28.033 "trtype": "tcp", 00:20:28.033 "traddr": "10.0.0.2", 00:20:28.033 "adrfam": "ipv4", 00:20:28.033 "trsvcid": "4420", 00:20:28.033 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:28.033 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:28.033 "hdgst": false, 00:20:28.033 "ddgst": false 00:20:28.033 }, 00:20:28.033 "method": "bdev_nvme_attach_controller" 00:20:28.033 },{ 00:20:28.033 "params": { 00:20:28.033 "name": "Nvme5", 00:20:28.033 "trtype": "tcp", 00:20:28.033 "traddr": "10.0.0.2", 00:20:28.034 "adrfam": "ipv4", 00:20:28.034 "trsvcid": "4420", 00:20:28.034 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:28.034 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:28.034 "hdgst": false, 00:20:28.034 "ddgst": false 00:20:28.034 }, 00:20:28.034 "method": "bdev_nvme_attach_controller" 00:20:28.034 },{ 00:20:28.034 "params": { 00:20:28.034 "name": "Nvme6", 00:20:28.034 "trtype": "tcp", 00:20:28.034 "traddr": "10.0.0.2", 00:20:28.034 "adrfam": "ipv4", 00:20:28.034 "trsvcid": "4420", 00:20:28.034 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:28.034 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:28.034 "hdgst": false, 00:20:28.034 "ddgst": false 00:20:28.034 }, 00:20:28.034 "method": "bdev_nvme_attach_controller" 00:20:28.034 },{ 00:20:28.034 "params": { 00:20:28.034 "name": "Nvme7", 00:20:28.034 "trtype": "tcp", 00:20:28.034 "traddr": "10.0.0.2", 00:20:28.034 "adrfam": "ipv4", 00:20:28.034 "trsvcid": "4420", 00:20:28.034 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:28.034 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:28.034 "hdgst": false, 00:20:28.034 "ddgst": false 00:20:28.034 }, 00:20:28.034 "method": "bdev_nvme_attach_controller" 00:20:28.034 },{ 00:20:28.034 "params": { 00:20:28.034 "name": "Nvme8", 00:20:28.034 "trtype": "tcp", 00:20:28.034 "traddr": "10.0.0.2", 00:20:28.034 "adrfam": "ipv4", 00:20:28.034 "trsvcid": "4420", 00:20:28.034 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:28.034 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:28.034 "hdgst": false, 00:20:28.034 "ddgst": false 00:20:28.034 }, 00:20:28.034 "method": 
"bdev_nvme_attach_controller" 00:20:28.034 },{ 00:20:28.034 "params": { 00:20:28.034 "name": "Nvme9", 00:20:28.034 "trtype": "tcp", 00:20:28.034 "traddr": "10.0.0.2", 00:20:28.034 "adrfam": "ipv4", 00:20:28.034 "trsvcid": "4420", 00:20:28.034 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:28.034 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:28.034 "hdgst": false, 00:20:28.034 "ddgst": false 00:20:28.034 }, 00:20:28.034 "method": "bdev_nvme_attach_controller" 00:20:28.034 },{ 00:20:28.034 "params": { 00:20:28.034 "name": "Nvme10", 00:20:28.034 "trtype": "tcp", 00:20:28.034 "traddr": "10.0.0.2", 00:20:28.034 "adrfam": "ipv4", 00:20:28.034 "trsvcid": "4420", 00:20:28.034 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:28.034 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:28.034 "hdgst": false, 00:20:28.034 "ddgst": false 00:20:28.034 }, 00:20:28.034 "method": "bdev_nvme_attach_controller" 00:20:28.034 }' 00:20:28.034 [2024-04-27 00:04:58.048769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.034 [2024-04-27 00:04:58.113629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.418 Running I/O for 10 seconds... 00:20:29.418 00:04:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:29.418 00:04:59 -- common/autotest_common.sh@850 -- # return 0 00:20:29.418 00:04:59 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:29.418 00:04:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.418 00:04:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.418 00:04:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.418 00:04:59 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:29.418 00:04:59 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:29.418 00:04:59 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:29.418 00:04:59 -- target/shutdown.sh@57 -- # local ret=1 00:20:29.418 00:04:59 -- target/shutdown.sh@58 -- # local i 00:20:29.418 00:04:59 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:29.418 00:04:59 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:29.418 00:04:59 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:29.418 00:04:59 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:29.418 00:04:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.418 00:04:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.418 00:04:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.678 00:04:59 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:29.678 00:04:59 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:29.678 00:04:59 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:29.938 00:04:59 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:29.938 00:04:59 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:29.938 00:04:59 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:29.938 00:04:59 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:29.938 00:04:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.938 00:04:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.938 00:04:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.938 00:04:59 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:29.938 00:04:59 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:29.938 00:04:59 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:30.200 00:05:00 -- target/shutdown.sh@59 -- # (( i-- )) 
00:20:30.200 00:05:00 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:30.200 00:05:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:30.200 00:05:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:30.200 00:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.200 00:05:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.200 00:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.200 00:05:00 -- target/shutdown.sh@60 -- # read_io_count=136 00:20:30.200 00:05:00 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:20:30.200 00:05:00 -- target/shutdown.sh@64 -- # ret=0 00:20:30.200 00:05:00 -- target/shutdown.sh@65 -- # break 00:20:30.200 00:05:00 -- target/shutdown.sh@69 -- # return 0 00:20:30.200 00:05:00 -- target/shutdown.sh@110 -- # killprocess 448552 00:20:30.200 00:05:00 -- common/autotest_common.sh@936 -- # '[' -z 448552 ']' 00:20:30.200 00:05:00 -- common/autotest_common.sh@940 -- # kill -0 448552 00:20:30.200 00:05:00 -- common/autotest_common.sh@941 -- # uname 00:20:30.200 00:05:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:30.200 00:05:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 448552 00:20:30.200 00:05:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:30.200 00:05:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:30.200 00:05:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 448552' 00:20:30.200 killing process with pid 448552 00:20:30.200 00:05:00 -- common/autotest_common.sh@955 -- # kill 448552 00:20:30.200 00:05:00 -- common/autotest_common.sh@960 -- # wait 448552 00:20:30.200 Received shutdown signal, test time was about 0.956069 seconds 00:20:30.200 00:20:30.200 Latency(us) 00:20:30.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.200 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme1n1 : 0.95 269.98 16.87 0.00 0.00 234046.08 20534.61 244667.73 00:20:30.200 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme2n1 : 0.95 269.29 16.83 0.00 0.00 229826.35 22937.60 248162.99 00:20:30.200 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme3n1 : 0.94 273.57 17.10 0.00 0.00 221362.35 9939.63 253405.87 00:20:30.200 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme4n1 : 0.96 268.01 16.75 0.00 0.00 221381.97 14745.60 251658.24 00:20:30.200 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme5n1 : 0.93 207.43 12.96 0.00 0.00 278850.56 32112.64 234181.97 00:20:30.200 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme6n1 : 0.92 208.62 13.04 0.00 0.00 270426.17 18131.63 232434.35 00:20:30.200 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme7n1 : 0.95 270.86 16.93 0.00 0.00 204267.52 19005.44 248162.99 00:20:30.200 Job: Nvme8n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme8n1 : 0.93 206.07 12.88 0.00 0.00 261566.86 19660.80 256901.12 00:20:30.200 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme9n1 : 0.94 203.88 12.74 0.00 0.00 258283.24 18786.99 272629.76 00:20:30.200 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:30.200 Verification LBA range: start 0x0 length 0x400 00:20:30.200 Nvme10n1 : 0.94 204.70 12.79 0.00 0.00 250884.27 20862.29 249910.61 00:20:30.200 =================================================================================================================== 00:20:30.200 Total : 2382.41 148.90 0.00 0.00 240102.01 9939.63 272629.76 00:20:30.462 00:05:00 -- target/shutdown.sh@113 -- # sleep 1 00:20:31.405 00:05:01 -- target/shutdown.sh@114 -- # kill -0 448167 00:20:31.405 00:05:01 -- target/shutdown.sh@116 -- # stoptarget 00:20:31.405 00:05:01 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:31.405 00:05:01 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:31.405 00:05:01 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:31.405 00:05:01 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:31.405 00:05:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:31.405 00:05:01 -- nvmf/common.sh@117 -- # sync 00:20:31.405 00:05:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.405 00:05:01 -- nvmf/common.sh@120 -- # set +e 00:20:31.405 00:05:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.405 00:05:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.405 rmmod nvme_tcp 00:20:31.405 rmmod nvme_fabrics 00:20:31.405 rmmod nvme_keyring 00:20:31.666 00:05:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.666 00:05:01 -- nvmf/common.sh@124 -- # set -e 00:20:31.666 00:05:01 -- nvmf/common.sh@125 -- # return 0 00:20:31.666 00:05:01 -- nvmf/common.sh@478 -- # '[' -n 448167 ']' 00:20:31.666 00:05:01 -- nvmf/common.sh@479 -- # killprocess 448167 00:20:31.666 00:05:01 -- common/autotest_common.sh@936 -- # '[' -z 448167 ']' 00:20:31.666 00:05:01 -- common/autotest_common.sh@940 -- # kill -0 448167 00:20:31.666 00:05:01 -- common/autotest_common.sh@941 -- # uname 00:20:31.666 00:05:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:31.666 00:05:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 448167 00:20:31.666 00:05:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:31.666 00:05:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:31.666 00:05:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 448167' 00:20:31.666 killing process with pid 448167 00:20:31.666 00:05:01 -- common/autotest_common.sh@955 -- # kill 448167 00:20:31.666 00:05:01 -- common/autotest_common.sh@960 -- # wait 448167 00:20:31.928 00:05:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:31.928 00:05:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:31.928 00:05:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:31.928 00:05:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.928 00:05:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:31.928 00:05:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:20:31.928 00:05:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.928 00:05:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.940 00:05:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:33.940 00:20:33.940 real 0m7.793s 00:20:33.940 user 0m23.350s 00:20:33.940 sys 0m1.185s 00:20:33.940 00:05:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:33.940 00:05:04 -- common/autotest_common.sh@10 -- # set +x 00:20:33.940 ************************************ 00:20:33.940 END TEST nvmf_shutdown_tc2 00:20:33.940 ************************************ 00:20:33.940 00:05:04 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:33.940 00:05:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:33.940 00:05:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:33.940 00:05:04 -- common/autotest_common.sh@10 -- # set +x 00:20:34.200 ************************************ 00:20:34.200 START TEST nvmf_shutdown_tc3 00:20:34.200 ************************************ 00:20:34.200 00:05:04 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:20:34.200 00:05:04 -- target/shutdown.sh@121 -- # starttarget 00:20:34.200 00:05:04 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:34.200 00:05:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:34.200 00:05:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.200 00:05:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:34.200 00:05:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:34.200 00:05:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:34.200 00:05:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.200 00:05:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.200 00:05:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.200 00:05:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:34.200 00:05:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:34.200 00:05:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.200 00:05:04 -- common/autotest_common.sh@10 -- # set +x 00:20:34.200 00:05:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.200 00:05:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.200 00:05:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.200 00:05:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.201 00:05:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.201 00:05:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.201 00:05:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.201 00:05:04 -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.201 00:05:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.201 00:05:04 -- nvmf/common.sh@296 -- # e810=() 00:20:34.201 00:05:04 -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.201 00:05:04 -- nvmf/common.sh@297 -- # x722=() 00:20:34.201 00:05:04 -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.201 00:05:04 -- nvmf/common.sh@298 -- # mlx=() 00:20:34.201 00:05:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.201 00:05:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.201 00:05:04 -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.201 00:05:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.201 00:05:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.201 00:05:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.201 00:05:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.201 00:05:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:34.201 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:34.201 00:05:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.201 00:05:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:34.201 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:34.201 00:05:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.201 00:05:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.201 00:05:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.201 00:05:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.201 00:05:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.201 00:05:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:34.201 Found net devices under 0000:31:00.0: cvl_0_0 00:20:34.201 00:05:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.201 00:05:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.201 00:05:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.201 00:05:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.201 00:05:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.201 00:05:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:34.201 Found net devices under 0000:31:00.1: cvl_0_1 00:20:34.201 00:05:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.201 
00:05:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:34.201 00:05:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:34.201 00:05:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:34.201 00:05:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:34.201 00:05:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.201 00:05:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.201 00:05:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.201 00:05:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:34.201 00:05:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.201 00:05:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.201 00:05:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.201 00:05:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.201 00:05:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.201 00:05:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.201 00:05:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.201 00:05:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.201 00:05:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.201 00:05:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.201 00:05:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.461 00:05:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.461 00:05:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.461 00:05:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.461 00:05:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.461 00:05:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:20:34.461 00:20:34.461 --- 10.0.0.2 ping statistics --- 00:20:34.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.461 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:20:34.461 00:05:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:20:34.461 00:20:34.461 --- 10.0.0.1 ping statistics --- 00:20:34.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.461 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:20:34.461 00:05:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.461 00:05:04 -- nvmf/common.sh@411 -- # return 0 00:20:34.461 00:05:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.461 00:05:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.461 00:05:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.461 00:05:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.461 00:05:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.461 00:05:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.461 00:05:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:34.461 00:05:04 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:34.461 00:05:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:34.461 00:05:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.461 00:05:04 -- common/autotest_common.sh@10 -- # set +x 00:20:34.461 00:05:04 -- nvmf/common.sh@470 -- # nvmfpid=450023 00:20:34.461 00:05:04 -- nvmf/common.sh@471 -- # waitforlisten 450023 00:20:34.461 00:05:04 -- common/autotest_common.sh@817 -- # '[' -z 450023 ']' 00:20:34.461 00:05:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:34.461 00:05:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.461 00:05:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.461 00:05:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.462 00:05:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.462 00:05:04 -- common/autotest_common.sh@10 -- # set +x 00:20:34.721 [2024-04-27 00:05:04.698140] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:20:34.721 [2024-04-27 00:05:04.698188] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.721 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.721 [2024-04-27 00:05:04.764313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.721 [2024-04-27 00:05:04.828047] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.721 [2024-04-27 00:05:04.828084] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.721 [2024-04-27 00:05:04.828092] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.721 [2024-04-27 00:05:04.828099] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.721 [2024-04-27 00:05:04.828104] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:34.721 [2024-04-27 00:05:04.828214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.721 [2024-04-27 00:05:04.828369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.721 [2024-04-27 00:05:04.828524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.721 [2024-04-27 00:05:04.828525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:35.292 00:05:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:35.292 00:05:05 -- common/autotest_common.sh@850 -- # return 0 00:20:35.292 00:05:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:35.292 00:05:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.292 00:05:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.292 00:05:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.292 00:05:05 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.292 00:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.292 00:05:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.292 [2024-04-27 00:05:05.502395] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.292 00:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.292 00:05:05 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:35.292 00:05:05 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:35.292 00:05:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:35.292 00:05:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.554 00:05:05 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.554 00:05:05 -- target/shutdown.sh@28 -- # cat 00:20:35.554 00:05:05 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:35.554 00:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.554 00:05:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.554 Malloc1 00:20:35.554 [2024-04-27 00:05:05.598740] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.554 Malloc2 
00:20:35.554 Malloc3 00:20:35.554 Malloc4 00:20:35.554 Malloc5 00:20:35.554 Malloc6 00:20:35.816 Malloc7 00:20:35.816 Malloc8 00:20:35.816 Malloc9 00:20:35.816 Malloc10 00:20:35.816 00:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.816 00:05:05 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:35.816 00:05:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.816 00:05:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.816 00:05:06 -- target/shutdown.sh@125 -- # perfpid=450232 00:20:35.816 00:05:06 -- target/shutdown.sh@126 -- # waitforlisten 450232 /var/tmp/bdevperf.sock 00:20:35.816 00:05:06 -- common/autotest_common.sh@817 -- # '[' -z 450232 ']' 00:20:35.816 00:05:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.816 00:05:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.816 00:05:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.816 00:05:06 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:35.816 00:05:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.816 00:05:06 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:35.816 00:05:06 -- common/autotest_common.sh@10 -- # set +x 00:20:35.816 00:05:06 -- nvmf/common.sh@521 -- # config=() 00:20:35.816 00:05:06 -- nvmf/common.sh@521 -- # local subsystem config 00:20:35.816 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.816 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.816 { 00:20:35.816 "params": { 00:20:35.816 "name": "Nvme$subsystem", 00:20:35.816 "trtype": "$TEST_TRANSPORT", 00:20:35.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.816 "adrfam": "ipv4", 00:20:35.816 "trsvcid": "$NVMF_PORT", 00:20:35.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.816 "hdgst": ${hdgst:-false}, 00:20:35.816 "ddgst": ${ddgst:-false} 00:20:35.816 }, 00:20:35.816 "method": "bdev_nvme_attach_controller" 00:20:35.816 } 00:20:35.816 EOF 00:20:35.816 )") 00:20:35.816 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:35.816 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.816 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.816 { 00:20:35.816 "params": { 00:20:35.816 "name": "Nvme$subsystem", 00:20:35.816 "trtype": "$TEST_TRANSPORT", 00:20:35.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.816 "adrfam": "ipv4", 00:20:35.816 "trsvcid": "$NVMF_PORT", 00:20:35.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.816 "hdgst": ${hdgst:-false}, 00:20:35.816 "ddgst": ${ddgst:-false} 00:20:35.816 }, 00:20:35.816 "method": "bdev_nvme_attach_controller" 00:20:35.816 } 00:20:35.816 EOF 00:20:35.816 )") 00:20:35.816 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:35.816 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.816 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.816 { 00:20:35.816 "params": { 00:20:35.816 "name": "Nvme$subsystem", 00:20:35.816 "trtype": "$TEST_TRANSPORT", 00:20:35.816 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:20:35.816 "adrfam": "ipv4", 00:20:35.816 "trsvcid": "$NVMF_PORT", 00:20:35.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.816 "hdgst": ${hdgst:-false}, 00:20:35.816 "ddgst": ${ddgst:-false} 00:20:35.816 }, 00:20:35.816 "method": "bdev_nvme_attach_controller" 00:20:35.816 } 00:20:35.816 EOF 00:20:35.816 )") 00:20:35.816 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:35.816 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.816 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.816 { 00:20:35.816 "params": { 00:20:35.816 "name": "Nvme$subsystem", 00:20:35.816 "trtype": "$TEST_TRANSPORT", 00:20:35.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.816 "adrfam": "ipv4", 00:20:35.816 "trsvcid": "$NVMF_PORT", 00:20:35.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.816 "hdgst": ${hdgst:-false}, 00:20:35.816 "ddgst": ${ddgst:-false} 00:20:35.816 }, 00:20:35.816 "method": "bdev_nvme_attach_controller" 00:20:35.816 } 00:20:35.816 EOF 00:20:35.816 )") 00:20:35.816 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:35.816 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.092 { 00:20:36.092 "params": { 00:20:36.092 "name": "Nvme$subsystem", 00:20:36.092 "trtype": "$TEST_TRANSPORT", 00:20:36.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.092 "adrfam": "ipv4", 00:20:36.092 "trsvcid": "$NVMF_PORT", 00:20:36.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.092 "hdgst": ${hdgst:-false}, 00:20:36.092 "ddgst": ${ddgst:-false} 00:20:36.092 }, 00:20:36.092 "method": "bdev_nvme_attach_controller" 00:20:36.092 } 00:20:36.092 EOF 00:20:36.092 )") 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:36.092 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.092 { 00:20:36.092 "params": { 00:20:36.092 "name": "Nvme$subsystem", 00:20:36.092 "trtype": "$TEST_TRANSPORT", 00:20:36.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.092 "adrfam": "ipv4", 00:20:36.092 "trsvcid": "$NVMF_PORT", 00:20:36.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.092 "hdgst": ${hdgst:-false}, 00:20:36.092 "ddgst": ${ddgst:-false} 00:20:36.092 }, 00:20:36.092 "method": "bdev_nvme_attach_controller" 00:20:36.092 } 00:20:36.092 EOF 00:20:36.092 )") 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:36.092 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.092 { 00:20:36.092 "params": { 00:20:36.092 "name": "Nvme$subsystem", 00:20:36.092 "trtype": "$TEST_TRANSPORT", 00:20:36.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.092 "adrfam": "ipv4", 00:20:36.092 "trsvcid": "$NVMF_PORT", 00:20:36.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.092 "hdgst": ${hdgst:-false}, 00:20:36.092 "ddgst": ${ddgst:-false} 00:20:36.092 }, 00:20:36.092 "method": "bdev_nvme_attach_controller" 00:20:36.092 } 00:20:36.092 EOF 00:20:36.092 )") 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:36.092 [2024-04-27 00:05:06.056670] Starting SPDK v24.05-pre git sha1 
f1d799ad0 / DPDK 23.11.0 initialization... 00:20:36.092 [2024-04-27 00:05:06.056720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450232 ] 00:20:36.092 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.092 { 00:20:36.092 "params": { 00:20:36.092 "name": "Nvme$subsystem", 00:20:36.092 "trtype": "$TEST_TRANSPORT", 00:20:36.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.092 "adrfam": "ipv4", 00:20:36.092 "trsvcid": "$NVMF_PORT", 00:20:36.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.092 "hdgst": ${hdgst:-false}, 00:20:36.092 "ddgst": ${ddgst:-false} 00:20:36.092 }, 00:20:36.092 "method": "bdev_nvme_attach_controller" 00:20:36.092 } 00:20:36.092 EOF 00:20:36.092 )") 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:36.092 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.092 { 00:20:36.092 "params": { 00:20:36.092 "name": "Nvme$subsystem", 00:20:36.092 "trtype": "$TEST_TRANSPORT", 00:20:36.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.092 "adrfam": "ipv4", 00:20:36.092 "trsvcid": "$NVMF_PORT", 00:20:36.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.092 "hdgst": ${hdgst:-false}, 00:20:36.092 "ddgst": ${ddgst:-false} 00:20:36.092 }, 00:20:36.092 "method": "bdev_nvme_attach_controller" 00:20:36.092 } 00:20:36.092 EOF 00:20:36.092 )") 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:36.092 00:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.092 00:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.092 { 00:20:36.092 "params": { 00:20:36.092 "name": "Nvme$subsystem", 00:20:36.092 "trtype": "$TEST_TRANSPORT", 00:20:36.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.092 "adrfam": "ipv4", 00:20:36.092 "trsvcid": "$NVMF_PORT", 00:20:36.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.092 "hdgst": ${hdgst:-false}, 00:20:36.093 "ddgst": ${ddgst:-false} 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 } 00:20:36.093 EOF 00:20:36.093 )") 00:20:36.093 00:05:06 -- nvmf/common.sh@543 -- # cat 00:20:36.093 00:05:06 -- nvmf/common.sh@545 -- # jq . 
00:20:36.093 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.093 00:05:06 -- nvmf/common.sh@546 -- # IFS=, 00:20:36.093 00:05:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme1", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 },{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme2", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 },{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme3", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 },{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme4", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 },{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme5", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 },{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme6", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 },{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme7", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 },{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme8", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 
00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 },{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme9", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 },{ 00:20:36.093 "params": { 00:20:36.093 "name": "Nvme10", 00:20:36.093 "trtype": "tcp", 00:20:36.093 "traddr": "10.0.0.2", 00:20:36.093 "adrfam": "ipv4", 00:20:36.093 "trsvcid": "4420", 00:20:36.093 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:36.093 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:36.093 "hdgst": false, 00:20:36.093 "ddgst": false 00:20:36.093 }, 00:20:36.093 "method": "bdev_nvme_attach_controller" 00:20:36.093 }' 00:20:36.093 [2024-04-27 00:05:06.117975] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.093 [2024-04-27 00:05:06.182263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.014 Running I/O for 10 seconds... 00:20:38.014 00:05:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:38.014 00:05:07 -- common/autotest_common.sh@850 -- # return 0 00:20:38.014 00:05:07 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:38.014 00:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.014 00:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:38.014 00:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.014 00:05:07 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.014 00:05:07 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:38.014 00:05:07 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:38.014 00:05:07 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:38.014 00:05:07 -- target/shutdown.sh@57 -- # local ret=1 00:20:38.014 00:05:07 -- target/shutdown.sh@58 -- # local i 00:20:38.014 00:05:07 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:38.014 00:05:07 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.014 00:05:07 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.014 00:05:07 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.014 00:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.014 00:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:38.014 00:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.014 00:05:07 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:38.014 00:05:07 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:38.014 00:05:07 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:38.275 00:05:08 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:38.275 00:05:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.275 00:05:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.275 00:05:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.275 00:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.275 00:05:08 -- common/autotest_common.sh@10 -- # set +x 00:20:38.275 00:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.275 00:05:08 -- target/shutdown.sh@60 -- # read_io_count=67 
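[editorial sketch] The xtrace above shows target/shutdown.sh's waitforio helper polling bdevperf over /var/tmp/bdevperf.sock until Nvme1n1 has served at least 100 reads (read_io_count goes 3 -> 67 in the trace so far). A minimal reconstruction of that polling loop is sketched below; it is inferred from the trace only, and the direct scripts/rpc.py invocation stands in for the suite's rpc_cmd wrapper, which is an assumption not shown verbatim in this log.

    # Sketch of the waitforio pattern seen in the trace (assumptions noted inline).
    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1
        local i
        # Poll up to 10 times, 0.25s apart, until the bdev reports >= 100 read ops.
        for ((i = 10; i != 0; i--)); do
            # rpc_cmd wraps scripts/rpc.py in the real suite; calling it directly here is an assumption.
            read_io_count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

    # Usage mirroring the log: waitforio /var/tmp/bdevperf.sock Nvme1n1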
00:20:38.275 00:05:08 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:38.275 00:05:08 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:38.552 00:05:08 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:38.552 00:05:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.552 00:05:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.552 00:05:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.552 00:05:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.552 00:05:08 -- common/autotest_common.sh@10 -- # set +x 00:20:38.552 00:05:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.552 00:05:08 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:38.552 00:05:08 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:38.552 00:05:08 -- target/shutdown.sh@64 -- # ret=0 00:20:38.552 00:05:08 -- target/shutdown.sh@65 -- # break 00:20:38.552 00:05:08 -- target/shutdown.sh@69 -- # return 0 00:20:38.552 00:05:08 -- target/shutdown.sh@135 -- # killprocess 450023 00:20:38.552 00:05:08 -- common/autotest_common.sh@936 -- # '[' -z 450023 ']' 00:20:38.552 00:05:08 -- common/autotest_common.sh@940 -- # kill -0 450023 00:20:38.552 00:05:08 -- common/autotest_common.sh@941 -- # uname 00:20:38.552 00:05:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:38.552 00:05:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 450023 00:20:38.552 00:05:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:38.552 00:05:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:38.552 00:05:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 450023' 00:20:38.552 killing process with pid 450023 00:20:38.552 00:05:08 -- common/autotest_common.sh@955 -- # kill 450023 00:20:38.552 00:05:08 -- common/autotest_common.sh@960 -- # wait 450023 00:20:38.552 [2024-04-27 00:05:08.653231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.552 [2024-04-27 00:05:08.653588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.552 [2024-04-27 00:05:08.653596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653652] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with [2024-04-27 00:05:08.653684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:38.553 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653695] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653696] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653703] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653709] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653714] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653720] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653725] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653731] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653736] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653741] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with [2024-04-27 00:05:08.653740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:38.553 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653748] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:1[2024-04-27 00:05:08.653753] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653760] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653765] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653771] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653774] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653779] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with [2024-04-27 00:05:08.653779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:38.553 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653786] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653791] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with [2024-04-27 00:05:08.653791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:1the state(5) to be set 00:20:38.553 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653798] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653803] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653808] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653813] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-27 00:05:08.653819] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653825] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653830] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653835] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653843] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653849] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653853] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653859] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653863] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653868] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653873] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653879] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653884] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653889] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653894] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653899] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653905] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653909] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653915] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.553 [2024-04-27 00:05:08.653920] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653925] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with [2024-04-27 00:05:08.653925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:1the state(5) to be set 00:20:38.553 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.553 [2024-04-27 00:05:08.653931] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.553 [2024-04-27 00:05:08.653934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.653937] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653942] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.653947] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653952] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with [2024-04-27 00:05:08.653951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:38.554 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.653960] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653964] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.653970] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.653975] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653980] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.653985] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653990] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.653991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.653995] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.654000] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.654000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654004] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.654008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-27 00:05:08.654009] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.654015] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.654019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:1[2024-04-27 00:05:08.654020] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.654027] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.654028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654032] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fb950 is same with the state(5) to be set 00:20:38.554 [2024-04-27 00:05:08.654038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.554 [2024-04-27 00:05:08.654332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.554 [2024-04-27 00:05:08.654339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.555 [2024-04-27 00:05:08.654348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.555 [2024-04-27 00:05:08.654355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.555 [2024-04-27 00:05:08.654364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.555 [2024-04-27 00:05:08.654371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.555 [2024-04-27 00:05:08.654380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.555 [2024-04-27 00:05:08.654387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.555 [2024-04-27 00:05:08.654414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:38.555 [2024-04-27 00:05:08.654459] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x293e3e0 was disconnected and freed. reset controller. 
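[editorial sketch] The flood of "ABORTED - SQ DELETION" completions, the CQ transport error -6, and the qpair disconnect above are the expected fallout of the shutdown test killing the nvmf target (pid 450023) while bdevperf still had writes in flight on its ten controllers. The kill sequence is visible earlier in this log (kill -0, uname, ps --no-headers -o comm=, kill, wait); a rough reconstruction of that killprocess helper follows. It is inferred from the xtrace, so the real autotest_common.sh implementation may differ in details.

    # Sketch of the killprocess helper as traced above (reconstruction, not the verbatim source).
    killprocess() {
        local pid=$1
        # Bail out if the pid is empty or already gone.
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1
        # Refuse to kill a sudo wrapper by mistake; the target shows up as reactor_1 here.
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }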
00:20:38.555 [2024-04-27 00:05:08.656098] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656122] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656128] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656133] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656138] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656142] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656151] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656156] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656160] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656165] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656169] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656174] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656179] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656184] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656188] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656193] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656198] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656202] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656207] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656212] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656216] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set 00:20:38.555 [2024-04-27 00:05:08.656221] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x12f9960 is same with the state(5) to be set
00:20:38.555 [2024-04-27 00:05:08.656225] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f9960 is same with the state(5) to be set
[... the same tcp.c:1594 *ERROR* line repeats back-to-back between 00:05:08.656230 and 00:05:08.661045 for tqpair=0x12f9960 (through .656416), 0x12f9df0 (.657232), 0x12fa280 (.657745-.658063), 0x12fa710 (.658645-.658943), 0x12faba0 (.659584-.659604), 0x12fb030 (.660020-.660436) and 0x12fb4c0 (.660994-.661045) ...]
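The tcp.c:1594 lines condensed above, and the nvme_tcp.c:322 lines that follow, all carry the same wording, which points at the same kind of guard on the target and host sides of the TCP transport: a recv-state setter that logs and returns early when it is asked to move a qpair into the state it is already in, here state(5). The sketch below is a minimal standalone illustration of that pattern, not the SPDK sources; the enum names, their numeric values and the struct layout are assumptions made for this example.

/* recv_state_guard.c - illustrative only; build with: cc recv_state_guard.c */
#include <stdio.h>

enum pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	RECV_STATE_AWAIT_PDU_CH,
	RECV_STATE_AWAIT_PDU_PSH,
	RECV_STATE_AWAIT_PDU_PAYLOAD,
	RECV_STATE_QUIESCING,
	RECV_STATE_ERROR,	/* lands on 5 in this sketch, mirroring "state(5)" in the log */
};

struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

static void
qpair_set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Same wording as the log line; the setter refuses the no-op transition. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tcp_qpair tqpair = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

	qpair_set_recv_state(&tqpair, RECV_STATE_ERROR);	/* normal transition */
	qpair_set_recv_state(&tqpair, RECV_STATE_ERROR);	/* duplicate: prints the message */
	return 0;
}

If every pending PDU on a connection that is being torn down drives the setter toward the same terminal state, the guard fires once per call, which would explain why the message repeats in bursts per tqpair address in this part of the log.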
00:20:38.558 [2024-04-27 00:05:08.669616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:38.558 [2024-04-27 00:05:08.669639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:0-3 on each admin queue, and each group ends with "nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set" for tqpair=0x2804e40, 0x28713d0, 0x2849b80, 0x28b2e30, 0x28310d0, 0x282d920, 0x282dd80, 0x29cad60, 0x235afd0 and 0x28020d0 (00:05:08.669616-00:05:08.670484) ...]
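Every one of these completions is printed with status "(00/08)", i.e. Status Code Type 0 (generic command status) and Status Code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion": the queues are being torn down, so outstanding commands complete with this status instead of finishing normally. The helper below is a hypothetical decoder for just that pair, written for this note rather than taken from SPDK.

/* decode_status.c - hypothetical (sct/sc) decoder for the pair seen above */
#include <stdint.h>
#include <stdio.h>

/* Returns a printable name for (sct, sc); only the generic/0x08 case matters here. */
static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
	if (sct == 0x0) {			/* generic command status */
		switch (sc) {
		case 0x00: return "SUCCESS";
		case 0x07: return "ABORTED - BY REQUEST";
		case 0x08: return "ABORTED - SQ DELETION";
		default:   return "GENERIC (unlisted in this sketch)";
		}
	}
	return "NON-GENERIC (unlisted in this sketch)";
}

int main(void)
{
	/* The completion lines above print the pair as "(00/08)". */
	printf("(00/08) -> %s\n", nvme_status_str(0x00, 0x08));
	return 0;
}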
00:20:38.560 [2024-04-27 00:05:08.672162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-27 00:05:08.672183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the WRITE / ABORTED - SQ DELETION pair repeats for sqid:1 cid:34-63 (lba:28928-32640, len:128), followed by the same pattern for READ sqid:1 cid:0-21 (lba:24576-27264, len:128), 00:05:08.672196-00:05:08.673039 ...] 00:20:38.561 [2024-04-27 00:05:08.673048] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:38.561 [2024-04-27 00:05:08.673286] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2942020 was disconnected and freed. reset controller. 00:20:38.561 [2024-04-27 00:05:08.673853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.561 [2024-04-27 00:05:08.673885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.561 [2024-04-27 00:05:08.673892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.673902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.673910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.673919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.673926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.673936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.673943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.673952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.673959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.673968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.673975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.673984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.673991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674000] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.674339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.562 [2024-04-27 00:05:08.674348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.562 [2024-04-27 00:05:08.681076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:38.563 [2024-04-27 00:05:08.681596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:38.563 [2024-04-27 00:05:08.681781] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27fcd20 was disconnected and freed. reset controller. 
00:20:38.563 [2024-04-27 00:05:08.681918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.563 [2024-04-27 00:05:08.681930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.563 [2024-04-27 00:05:08.681943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.681951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.681961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.681968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.681978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.681984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.681994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 
00:05:08.682095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 
00:05:08.682260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 
00:05:08.682421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.564 [2024-04-27 00:05:08.682509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.564 [2024-04-27 00:05:08.682518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 
00:05:08.682583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 
00:05:08.682746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 
00:05:08.682919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.682967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.565 [2024-04-27 00:05:08.682974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.565 [2024-04-27 00:05:08.683027] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2947110 was disconnected and freed. reset controller. 00:20:38.565 [2024-04-27 00:05:08.683092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.565 [2024-04-27 00:05:08.683121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28020d0 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.683151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2804e40 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.683164] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28713d0 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.683178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2849b80 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.683190] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28b2e30 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.683202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28310d0 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.683214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282d920 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.683229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282dd80 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.683245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29cad60 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.683263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235afd0 (9): Bad file descriptor 00:20:38.565 [2024-04-27 00:05:08.688978] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x293f680 was disconnected and freed. reset controller. 
00:20:38.565 [2024-04-27 00:05:08.689236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:38.565 [2024-04-27 00:05:08.689643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.565 [2024-04-27 00:05:08.689877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.565 [2024-04-27 00:05:08.689899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28020d0 with addr=10.0.0.2, port=4420 00:20:38.565 [2024-04-27 00:05:08.689909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28020d0 is same with the state(5) to be set 00:20:38.566 [2024-04-27 00:05:08.691186] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.566 [2024-04-27 00:05:08.691210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:38.566 [2024-04-27 00:05:08.691220] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:38.566 [2024-04-27 00:05:08.691230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:38.566 [2024-04-27 00:05:08.691608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.566 [2024-04-27 00:05:08.692089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.566 [2024-04-27 00:05:08.692127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28310d0 with addr=10.0.0.2, port=4420 00:20:38.566 [2024-04-27 00:05:08.692138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28310d0 is same with the state(5) to be set 00:20:38.566 [2024-04-27 00:05:08.692154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28020d0 (9): Bad file descriptor 00:20:38.566 [2024-04-27 00:05:08.692231] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.566 [2024-04-27 00:05:08.692281] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.566 [2024-04-27 00:05:08.692318] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.566 [2024-04-27 00:05:08.692362] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:38.566 [2024-04-27 00:05:08.692708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.566 [2024-04-27 00:05:08.693198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.566 [2024-04-27 00:05:08.693236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28713d0 with addr=10.0.0.2, port=4420 00:20:38.566 [2024-04-27 00:05:08.693247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28713d0 is same with the state(5) to be set 00:20:38.566 [2024-04-27 00:05:08.693479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.566 [2024-04-27 00:05:08.694088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.566 [2024-04-27 00:05:08.694125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2804e40 with addr=10.0.0.2, port=4420 00:20:38.566 [2024-04-27 00:05:08.694137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2804e40 is same with the state(5) 
to be set 00:20:38.566 [2024-04-27 00:05:08.694497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.566 [2024-04-27 00:05:08.694881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.566 [2024-04-27 00:05:08.694891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235afd0 with addr=10.0.0.2, port=4420 00:20:38.566 [2024-04-27 00:05:08.694899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235afd0 is same with the state(5) to be set 00:20:38.566 [2024-04-27 00:05:08.694910] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28310d0 (9): Bad file descriptor 00:20:38.566 [2024-04-27 00:05:08.694926] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.566 [2024-04-27 00:05:08.694933] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.566 [2024-04-27 00:05:08.694941] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.566 [2024-04-27 00:05:08.695031] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.566 [2024-04-27 00:05:08.695043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28713d0 (9): Bad file descriptor 00:20:38.566 [2024-04-27 00:05:08.695052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2804e40 (9): Bad file descriptor 00:20:38.566 [2024-04-27 00:05:08.695061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235afd0 (9): Bad file descriptor 00:20:38.566 [2024-04-27 00:05:08.695069] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:38.566 [2024-04-27 00:05:08.695075] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:38.566 [2024-04-27 00:05:08.695082] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:38.566 [2024-04-27 00:05:08.695161] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.566 [2024-04-27 00:05:08.695189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:38.566 [2024-04-27 00:05:08.695195] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:38.566 [2024-04-27 00:05:08.695202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:38.566 [2024-04-27 00:05:08.695213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:38.566 [2024-04-27 00:05:08.695219] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:38.566 [2024-04-27 00:05:08.695225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:20:38.566 [2024-04-27 00:05:08.695236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:38.566 [2024-04-27 00:05:08.695243] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:38.566 [2024-04-27 00:05:08.695249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:38.566 [2024-04-27 00:05:08.695290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.566 [2024-04-27 00:05:08.695512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.566 [2024-04-27 00:05:08.695522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.567 [2024-04-27 00:05:08.695924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.567 [2024-04-27 00:05:08.695931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.695940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.695947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:38.568 [2024-04-27 00:05:08.695957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.695963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.695973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.695981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.695990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.695999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 
00:05:08.696126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696295] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.696367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.696376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2940ae0 is same with the state(5) to be set 00:20:38.568 [2024-04-27 00:05:08.697655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.697669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.697681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.697691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.697704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.697713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.697725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.697734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.697744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.697752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.697761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.697768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.697777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.697784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.697794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.697801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.697811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.568 [2024-04-27 00:05:08.697818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.568 [2024-04-27 00:05:08.697827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.697989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.697998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.569 [2024-04-27 00:05:08.698418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.569 [2024-04-27 00:05:08.698427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.698727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.698735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29434d0 is same with the state(5) to be set 00:20:38.570 [2024-04-27 00:05:08.699991] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.570 [2024-04-27 00:05:08.700248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.570 [2024-04-27 00:05:08.700256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:38.571 [2024-04-27 00:05:08.700666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 
00:05:08.700831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.571 [2024-04-27 00:05:08.700844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.571 [2024-04-27 00:05:08.700852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.700861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.700868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.700878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.700885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.700894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.700902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.700910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.700917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.700926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.700934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.700942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.700950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.700958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.700966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.700975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.700981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.700991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.700997] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.701007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.701014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.701023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.701030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.701041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.701048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.701056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27fb870 is same with the state(5) to be set 00:20:38.572 [2024-04-27 00:05:08.702318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.572 [2024-04-27 00:05:08.702556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.572 [2024-04-27 00:05:08.702563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.702990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.702997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.573 [2024-04-27 00:05:08.703182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.573 [2024-04-27 00:05:08.703191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.703393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.703401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27fe1d0 is same with the state(5) to be set 00:20:38.574 [2024-04-27 00:05:08.704911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.704930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.704941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.704949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.704958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.704965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.704975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.704982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.704992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.704999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705118] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.574 [2024-04-27 00:05:08.705232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.574 [2024-04-27 00:05:08.705241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:38.575 [2024-04-27 00:05:08.705788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.575 [2024-04-27 00:05:08.705828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.575 [2024-04-27 00:05:08.705842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.576 [2024-04-27 00:05:08.705849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.576 [2024-04-27 00:05:08.705859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.576 [2024-04-27 00:05:08.705866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.576 [2024-04-27 00:05:08.705875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.576 [2024-04-27 00:05:08.705883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.576 [2024-04-27 00:05:08.705891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.576 [2024-04-27 00:05:08.705899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.576 [2024-04-27 00:05:08.705908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.576 [2024-04-27 00:05:08.705917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.576 [2024-04-27 00:05:08.705927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.576 [2024-04-27 00:05:08.705934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.576 [2024-04-27 00:05:08.705943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.576 [2024-04-27 00:05:08.705950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.576 [2024-04-27 
00:05:08.705959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.576 [2024-04-27 00:05:08.705967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.576 [2024-04-27 00:05:08.705976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.576 [2024-04-27 00:05:08.705983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.576 [2024-04-27 00:05:08.705992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29485d0 is same with the state(5) to be set 00:20:38.576 [2024-04-27 00:05:08.707756] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.576 [2024-04-27 00:05:08.707773] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.576 [2024-04-27 00:05:08.707780] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.576 [2024-04-27 00:05:08.707789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:38.576 [2024-04-27 00:05:08.707799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:38.576 [2024-04-27 00:05:08.707874] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.576 [2024-04-27 00:05:08.707888] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.576 [2024-04-27 00:05:08.707903] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:38.576 [2024-04-27 00:05:08.707965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:38.576 [2024-04-27 00:05:08.707976] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:38.576 task offset: 25216 on job bdev=Nvme1n1 fails
00:20:38.576
00:20:38.576 Latency(us)
00:20:38.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:38.576 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Job: Nvme1n1 ended in about 0.93 seconds with error
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme1n1 : 0.93 207.02 12.94 69.01 0.00 229179.09 17585.49 249910.61
00:20:38.576 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme2n1 : 0.94 204.13 12.76 0.00 0.00 303379.34 38229.33 277872.64
00:20:38.576 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Job: Nvme3n1 ended in about 0.95 seconds with error
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme3n1 : 0.95 201.40 12.59 67.13 0.00 225869.97 13216.43 256901.12
00:20:38.576 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Job: Nvme4n1 ended in about 0.94 seconds with error
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme4n1 : 0.94 203.85 12.74 67.95 0.00 218218.24 13052.59 246415.36
00:20:38.576 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Job: Nvme5n1 ended in about 0.96 seconds with error
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme5n1 : 0.96 200.91 12.56 66.97 0.00 216795.73 20206.93 241172.48
00:20:38.576 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Job: Nvme6n1 ended in about 0.96 seconds with error
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme6n1 : 0.96 133.61 8.35 66.81 0.00 283519.72 23046.83 269134.51
00:20:38.576 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Job: Nvme7n1 ended in about 0.94 seconds with error
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme7n1 : 0.94 203.58 12.72 67.86 0.00 203977.28 12615.68 204472.32
00:20:38.576 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Job: Nvme8n1 ended in about 0.96 seconds with error
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme8n1 : 0.96 133.29 8.33 66.64 0.00 271428.84 14199.47 244667.73
00:20:38.576 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Job: Nvme9n1 ended in about 0.94 seconds with error
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme9n1 : 0.94 203.33 12.71 67.78 0.00 194761.60 18240.85 220200.96
00:20:38.576 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:38.576 Job: Nvme10n1 ended in about 0.96 seconds with error
00:20:38.576 Verification LBA range: start 0x0 length 0x400
00:20:38.576 Nvme10n1 : 0.96 132.93 8.31 66.47 0.00 259652.27 13981.01 269134.51
00:20:38.576 ===================================================================================================================
00:20:38.576 Total : 1824.05 114.00 606.61 0.00 236365.23 12615.68 277872.64
00:20:38.576
[2024-04-27 00:05:08.732070] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:38.576 [2024-04-27 00:05:08.732100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:38.576 [2024-04-27 00:05:08.732427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.576 [2024-04-27 00:05:08.732664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.576 [2024-04-27 00:05:08.732674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x282d920 with addr=10.0.0.2, port=4420 00:20:38.576 [2024-04-27 00:05:08.732684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282d920 is same with the state(5) to be set 00:20:38.576 [2024-04-27 00:05:08.733029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.576 [2024-04-27 00:05:08.733312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.576 [2024-04-27 00:05:08.733322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x282dd80 with addr=10.0.0.2, port=4420 00:20:38.576 [2024-04-27 00:05:08.733330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282dd80 is same with the state(5) to be set 00:20:38.576 [2024-04-27 00:05:08.734731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:38.576 [2024-04-27 00:05:08.734748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:38.576 [2024-04-27 00:05:08.734758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:38.576 [2024-04-27 00:05:08.735018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.576 [2024-04-27 00:05:08.735354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.576 [2024-04-27 00:05:08.735367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29cad60 with addr=10.0.0.2, port=4420 00:20:38.576 [2024-04-27 00:05:08.735375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29cad60 is same with the state(5) to be set 00:20:38.576 [2024-04-27 00:05:08.735590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.576 [2024-04-27 00:05:08.735800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.576 [2024-04-27 00:05:08.735810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2849b80 with addr=10.0.0.2, port=4420 00:20:38.576 [2024-04-27 00:05:08.735818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2849b80 is same with the state(5) to be set 00:20:38.577 [2024-04-27 00:05:08.736036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.736254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.736264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28b2e30 with addr=10.0.0.2, port=4420 00:20:38.577 [2024-04-27 00:05:08.736271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b2e30 is same with the state(5) to be set 00:20:38.577 [2024-04-27 00:05:08.736283] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282d920 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.736294] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282dd80 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.736327] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.577 [2024-04-27 00:05:08.736339] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.577 [2024-04-27 00:05:08.736353] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.577 [2024-04-27 00:05:08.736363] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:38.577 [2024-04-27 00:05:08.736419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:38.577 [2024-04-27 00:05:08.736428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:38.577 [2024-04-27 00:05:08.736830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.737030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.737045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28020d0 with addr=10.0.0.2, port=4420 00:20:38.577 [2024-04-27 00:05:08.737053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28020d0 is same with the state(5) to be set 00:20:38.577 [2024-04-27 00:05:08.737261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.737588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.737597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28310d0 with addr=10.0.0.2, port=4420 00:20:38.577 [2024-04-27 00:05:08.737604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28310d0 is same with the state(5) to be set 00:20:38.577 [2024-04-27 00:05:08.737962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.738307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.738317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235afd0 with addr=10.0.0.2, port=4420 00:20:38.577 [2024-04-27 00:05:08.738324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235afd0 is same with the state(5) to be set 00:20:38.577 [2024-04-27 00:05:08.738333] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29cad60 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.738346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2849b80 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.738355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28b2e30 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.738364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:38.577 [2024-04-27 00:05:08.738370] 
nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:38.577 [2024-04-27 00:05:08.738379] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:38.577 [2024-04-27 00:05:08.738391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:38.577 [2024-04-27 00:05:08.738397] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:38.577 [2024-04-27 00:05:08.738404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:38.577 [2024-04-27 00:05:08.738469] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.577 [2024-04-27 00:05:08.738477] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.577 [2024-04-27 00:05:08.738692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.738916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.738926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2804e40 with addr=10.0.0.2, port=4420 00:20:38.577 [2024-04-27 00:05:08.738933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2804e40 is same with the state(5) to be set 00:20:38.577 [2024-04-27 00:05:08.739155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.739338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.577 [2024-04-27 00:05:08.739349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28713d0 with addr=10.0.0.2, port=4420 00:20:38.577 [2024-04-27 00:05:08.739357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28713d0 is same with the state(5) to be set 00:20:38.577 [2024-04-27 00:05:08.739365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28020d0 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.739374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28310d0 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.739383] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235afd0 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.739392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:38.577 [2024-04-27 00:05:08.739398] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:38.577 [2024-04-27 00:05:08.739405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:38.577 [2024-04-27 00:05:08.739415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:38.577 [2024-04-27 00:05:08.739421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:38.577 [2024-04-27 00:05:08.739428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:20:38.577 [2024-04-27 00:05:08.739437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:38.577 [2024-04-27 00:05:08.739444] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:38.577 [2024-04-27 00:05:08.739451] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:38.577 [2024-04-27 00:05:08.739482] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.577 [2024-04-27 00:05:08.739492] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.577 [2024-04-27 00:05:08.739498] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.577 [2024-04-27 00:05:08.739506] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2804e40 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.739515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28713d0 (9): Bad file descriptor 00:20:38.577 [2024-04-27 00:05:08.739523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.577 [2024-04-27 00:05:08.739529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:38.577 [2024-04-27 00:05:08.739535] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.577 [2024-04-27 00:05:08.739895] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:38.577 [2024-04-27 00:05:08.739904] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:38.578 [2024-04-27 00:05:08.739912] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:38.578 [2024-04-27 00:05:08.739923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:38.578 [2024-04-27 00:05:08.739930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:38.578 [2024-04-27 00:05:08.739936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:38.578 [2024-04-27 00:05:08.739978] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.578 [2024-04-27 00:05:08.739987] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.578 [2024-04-27 00:05:08.739993] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.578 [2024-04-27 00:05:08.739999] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:38.578 [2024-04-27 00:05:08.740005] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:38.578 [2024-04-27 00:05:08.740012] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:20:38.578 [2024-04-27 00:05:08.740021] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:38.578 [2024-04-27 00:05:08.740028] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:38.578 [2024-04-27 00:05:08.740035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:38.578 [2024-04-27 00:05:08.740062] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.578 [2024-04-27 00:05:08.740069] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:38.840 00:05:08 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:38.840 00:05:08 -- target/shutdown.sh@139 -- # sleep 1 00:20:39.784 00:05:09 -- target/shutdown.sh@142 -- # kill -9 450232 00:20:39.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (450232) - No such process 00:20:39.784 00:05:09 -- target/shutdown.sh@142 -- # true 00:20:39.784 00:05:09 -- target/shutdown.sh@144 -- # stoptarget 00:20:39.784 00:05:09 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:39.784 00:05:09 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:39.784 00:05:09 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.784 00:05:09 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:39.784 00:05:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:39.784 00:05:09 -- nvmf/common.sh@117 -- # sync 00:20:39.784 00:05:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.784 00:05:09 -- nvmf/common.sh@120 -- # set +e 00:20:39.784 00:05:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.784 00:05:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.784 rmmod nvme_tcp 00:20:39.784 rmmod nvme_fabrics 00:20:39.784 rmmod nvme_keyring 00:20:40.046 00:05:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.046 00:05:10 -- nvmf/common.sh@124 -- # set -e 00:20:40.046 00:05:10 -- nvmf/common.sh@125 -- # return 0 00:20:40.046 00:05:10 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:40.046 00:05:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:40.046 00:05:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:40.046 00:05:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:40.046 00:05:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.046 00:05:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.046 00:05:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.046 00:05:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.046 00:05:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.962 00:05:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:41.962 00:20:41.962 real 0m7.867s 00:20:41.962 user 0m19.246s 00:20:41.962 sys 0m1.247s 00:20:41.962 00:05:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:41.962 00:05:12 -- common/autotest_common.sh@10 -- # set +x 00:20:41.962 ************************************ 00:20:41.962 END TEST nvmf_shutdown_tc3 00:20:41.962 ************************************ 00:20:41.962 00:05:12 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:41.962 00:20:41.962 real 0m32.701s 00:20:41.962 user 1m16.695s 00:20:41.962 sys 0m9.159s 
00:20:41.962 00:05:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:41.962 00:05:12 -- common/autotest_common.sh@10 -- # set +x 00:20:41.962 ************************************ 00:20:41.962 END TEST nvmf_shutdown 00:20:41.962 ************************************ 00:20:41.962 00:05:12 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:20:41.962 00:05:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:41.962 00:05:12 -- common/autotest_common.sh@10 -- # set +x 00:20:42.224 00:05:12 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:20:42.224 00:05:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:42.224 00:05:12 -- common/autotest_common.sh@10 -- # set +x 00:20:42.224 00:05:12 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:20:42.224 00:05:12 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:42.224 00:05:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:42.224 00:05:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.224 00:05:12 -- common/autotest_common.sh@10 -- # set +x 00:20:42.224 ************************************ 00:20:42.224 START TEST nvmf_multicontroller 00:20:42.224 ************************************ 00:20:42.224 00:05:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:42.486 * Looking for test storage... 00:20:42.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:42.486 00:05:12 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.486 00:05:12 -- nvmf/common.sh@7 -- # uname -s 00:20:42.486 00:05:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.486 00:05:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.486 00:05:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.486 00:05:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.486 00:05:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.486 00:05:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.486 00:05:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.486 00:05:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.486 00:05:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.486 00:05:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.486 00:05:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.486 00:05:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.486 00:05:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.486 00:05:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.486 00:05:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.486 00:05:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.486 00:05:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.486 00:05:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.486 00:05:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.486 00:05:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.486 00:05:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.486 00:05:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.486 00:05:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.486 00:05:12 -- paths/export.sh@5 -- # export PATH 00:20:42.486 00:05:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.486 00:05:12 -- nvmf/common.sh@47 -- # : 0 00:20:42.486 00:05:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.486 00:05:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.486 00:05:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.486 00:05:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.486 00:05:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.486 00:05:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.486 00:05:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.486 00:05:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.486 00:05:12 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:42.486 00:05:12 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:42.486 00:05:12 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:42.486 00:05:12 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:42.486 00:05:12 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.486 00:05:12 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:42.486 00:05:12 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:42.486 00:05:12 -- nvmf/common.sh@430 -- # '[' -z tcp 
']' 00:20:42.486 00:05:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.486 00:05:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:42.486 00:05:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:42.486 00:05:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:42.486 00:05:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.486 00:05:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.486 00:05:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.486 00:05:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:42.486 00:05:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:42.486 00:05:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.486 00:05:12 -- common/autotest_common.sh@10 -- # set +x 00:20:50.630 00:05:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:50.630 00:05:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:50.630 00:05:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:50.630 00:05:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:50.630 00:05:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:50.630 00:05:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:50.630 00:05:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:50.630 00:05:19 -- nvmf/common.sh@295 -- # net_devs=() 00:20:50.630 00:05:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:50.630 00:05:19 -- nvmf/common.sh@296 -- # e810=() 00:20:50.630 00:05:19 -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.630 00:05:19 -- nvmf/common.sh@297 -- # x722=() 00:20:50.630 00:05:19 -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.630 00:05:19 -- nvmf/common.sh@298 -- # mlx=() 00:20:50.630 00:05:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.630 00:05:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.630 00:05:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.630 00:05:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.630 00:05:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.630 00:05:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.630 00:05:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:50.630 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:50.630 00:05:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.630 00:05:19 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.630 00:05:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:50.630 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:50.630 00:05:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.630 00:05:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.630 00:05:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.630 00:05:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:50.630 00:05:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.630 00:05:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:50.630 Found net devices under 0000:31:00.0: cvl_0_0 00:20:50.630 00:05:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.630 00:05:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.630 00:05:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.630 00:05:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:50.630 00:05:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.630 00:05:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:50.630 Found net devices under 0000:31:00.1: cvl_0_1 00:20:50.630 00:05:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.630 00:05:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:50.630 00:05:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:50.630 00:05:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:50.630 00:05:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.630 00:05:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.630 00:05:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.630 00:05:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:50.630 00:05:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.630 00:05:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.630 00:05:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:50.630 00:05:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.630 00:05:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.630 00:05:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:50.630 00:05:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:50.630 00:05:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.630 00:05:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:20:50.630 00:05:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.630 00:05:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.630 00:05:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:50.630 00:05:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.630 00:05:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.630 00:05:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.630 00:05:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:50.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:20:50.630 00:20:50.630 --- 10.0.0.2 ping statistics --- 00:20:50.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.630 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:20:50.630 00:05:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:20:50.630 00:20:50.630 --- 10.0.0.1 ping statistics --- 00:20:50.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.630 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:20:50.630 00:05:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.630 00:05:19 -- nvmf/common.sh@411 -- # return 0 00:20:50.630 00:05:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:50.630 00:05:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.630 00:05:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:50.630 00:05:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.630 00:05:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:50.630 00:05:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:50.630 00:05:19 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:50.630 00:05:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:50.630 00:05:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:50.630 00:05:19 -- common/autotest_common.sh@10 -- # set +x 00:20:50.630 00:05:19 -- nvmf/common.sh@470 -- # nvmfpid=455230 00:20:50.630 00:05:19 -- nvmf/common.sh@471 -- # waitforlisten 455230 00:20:50.630 00:05:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:50.630 00:05:19 -- common/autotest_common.sh@817 -- # '[' -z 455230 ']' 00:20:50.630 00:05:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.630 00:05:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:50.630 00:05:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.630 00:05:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:50.630 00:05:19 -- common/autotest_common.sh@10 -- # set +x 00:20:50.630 [2024-04-27 00:05:19.801039] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:20:50.630 [2024-04-27 00:05:19.801105] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.630 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.630 [2024-04-27 00:05:19.873277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:50.630 [2024-04-27 00:05:19.946426] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.630 [2024-04-27 00:05:19.946468] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.630 [2024-04-27 00:05:19.946476] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.630 [2024-04-27 00:05:19.946482] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.630 [2024-04-27 00:05:19.946488] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.630 [2024-04-27 00:05:19.946602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.630 [2024-04-27 00:05:19.946715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.630 [2024-04-27 00:05:19.946716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.630 00:05:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:50.630 00:05:20 -- common/autotest_common.sh@850 -- # return 0 00:20:50.630 00:05:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:50.630 00:05:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:50.630 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.630 00:05:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.630 00:05:20 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:50.630 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.630 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.630 [2024-04-27 00:05:20.622635] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.630 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.630 00:05:20 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:50.630 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.630 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.630 Malloc0 00:20:50.630 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.630 00:05:20 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:50.630 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.630 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.631 00:05:20 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:50.631 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.631 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.631 00:05:20 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.631 00:05:20 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:20:50.631 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 [2024-04-27 00:05:20.697127] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.631 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.631 00:05:20 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:50.631 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.631 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 [2024-04-27 00:05:20.709070] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:50.631 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.631 00:05:20 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:50.631 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.631 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 Malloc1 00:20:50.631 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.631 00:05:20 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:50.631 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.631 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.631 00:05:20 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:50.631 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.631 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.631 00:05:20 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:50.631 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.631 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.631 00:05:20 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:50.631 00:05:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.631 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 00:05:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.631 00:05:20 -- host/multicontroller.sh@44 -- # bdevperf_pid=455566 00:20:50.631 00:05:20 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.631 00:05:20 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:50.631 00:05:20 -- host/multicontroller.sh@47 -- # waitforlisten 455566 /var/tmp/bdevperf.sock 00:20:50.631 00:05:20 -- common/autotest_common.sh@817 -- # '[' -z 455566 ']' 00:20:50.631 00:05:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.631 00:05:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:50.631 00:05:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:50.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.631 00:05:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:50.631 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:51.571 00:05:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:51.571 00:05:21 -- common/autotest_common.sh@850 -- # return 0 00:20:51.571 00:05:21 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:51.571 00:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.571 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.833 NVMe0n1 00:20:51.833 00:05:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.833 00:05:21 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:51.833 00:05:21 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:51.833 00:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.833 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.833 00:05:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.833 1 00:20:51.833 00:05:21 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:51.833 00:05:21 -- common/autotest_common.sh@638 -- # local es=0 00:20:51.833 00:05:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:51.833 00:05:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:51.833 00:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:51.834 00:05:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:51.834 00:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.834 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.834 request: 00:20:51.834 { 00:20:51.834 "name": "NVMe0", 00:20:51.834 "trtype": "tcp", 00:20:51.834 "traddr": "10.0.0.2", 00:20:51.834 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:51.834 "hostaddr": "10.0.0.2", 00:20:51.834 "hostsvcid": "60000", 00:20:51.834 "adrfam": "ipv4", 00:20:51.834 "trsvcid": "4420", 00:20:51.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.834 "method": "bdev_nvme_attach_controller", 00:20:51.834 "req_id": 1 00:20:51.834 } 00:20:51.834 Got JSON-RPC error response 00:20:51.834 response: 00:20:51.834 { 00:20:51.834 "code": -114, 00:20:51.834 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:51.834 } 00:20:51.834 00:05:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:51.834 00:05:21 -- common/autotest_common.sh@641 -- # es=1 00:20:51.834 00:05:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:51.834 00:05:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:51.834 00:05:21 -- 
common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:51.834 00:05:21 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:51.834 00:05:21 -- common/autotest_common.sh@638 -- # local es=0 00:20:51.834 00:05:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:51.834 00:05:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:51.834 00:05:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:51.834 00:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.834 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.834 request: 00:20:51.834 { 00:20:51.834 "name": "NVMe0", 00:20:51.834 "trtype": "tcp", 00:20:51.834 "traddr": "10.0.0.2", 00:20:51.834 "hostaddr": "10.0.0.2", 00:20:51.834 "hostsvcid": "60000", 00:20:51.834 "adrfam": "ipv4", 00:20:51.834 "trsvcid": "4420", 00:20:51.834 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.834 "method": "bdev_nvme_attach_controller", 00:20:51.834 "req_id": 1 00:20:51.834 } 00:20:51.834 Got JSON-RPC error response 00:20:51.834 response: 00:20:51.834 { 00:20:51.834 "code": -114, 00:20:51.834 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:51.834 } 00:20:51.834 00:05:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:51.834 00:05:21 -- common/autotest_common.sh@641 -- # es=1 00:20:51.834 00:05:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:51.834 00:05:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:51.834 00:05:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:51.834 00:05:21 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:51.834 00:05:21 -- common/autotest_common.sh@638 -- # local es=0 00:20:51.834 00:05:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:51.834 00:05:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:51.834 00:05:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:51.834 00:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.834 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.834 request: 00:20:51.834 { 00:20:51.834 "name": 
"NVMe0", 00:20:51.834 "trtype": "tcp", 00:20:51.834 "traddr": "10.0.0.2", 00:20:51.834 "hostaddr": "10.0.0.2", 00:20:51.834 "hostsvcid": "60000", 00:20:51.834 "adrfam": "ipv4", 00:20:51.834 "trsvcid": "4420", 00:20:51.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.834 "multipath": "disable", 00:20:51.834 "method": "bdev_nvme_attach_controller", 00:20:51.834 "req_id": 1 00:20:51.834 } 00:20:51.834 Got JSON-RPC error response 00:20:51.834 response: 00:20:51.834 { 00:20:51.834 "code": -114, 00:20:51.834 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:51.834 } 00:20:51.834 00:05:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:51.834 00:05:21 -- common/autotest_common.sh@641 -- # es=1 00:20:51.834 00:05:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:51.834 00:05:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:51.834 00:05:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:51.834 00:05:21 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:51.834 00:05:21 -- common/autotest_common.sh@638 -- # local es=0 00:20:51.834 00:05:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:51.834 00:05:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:51.834 00:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:51.834 00:05:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:51.834 00:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.834 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.834 request: 00:20:51.834 { 00:20:51.834 "name": "NVMe0", 00:20:51.834 "trtype": "tcp", 00:20:51.834 "traddr": "10.0.0.2", 00:20:51.834 "hostaddr": "10.0.0.2", 00:20:51.834 "hostsvcid": "60000", 00:20:51.834 "adrfam": "ipv4", 00:20:51.834 "trsvcid": "4420", 00:20:51.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.834 "multipath": "failover", 00:20:51.834 "method": "bdev_nvme_attach_controller", 00:20:51.834 "req_id": 1 00:20:51.834 } 00:20:51.834 Got JSON-RPC error response 00:20:51.834 response: 00:20:51.834 { 00:20:51.834 "code": -114, 00:20:51.834 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:51.834 } 00:20:51.834 00:05:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:51.834 00:05:21 -- common/autotest_common.sh@641 -- # es=1 00:20:51.834 00:05:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:51.834 00:05:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:51.834 00:05:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:51.834 00:05:21 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:51.834 00:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.834 00:05:21 -- 
common/autotest_common.sh@10 -- # set +x 00:20:52.096 00:20:52.096 00:05:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.096 00:05:22 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:52.096 00:05:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.097 00:05:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.097 00:05:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.097 00:05:22 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:52.097 00:05:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.097 00:05:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.097 00:20:52.097 00:05:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.097 00:05:22 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:52.097 00:05:22 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:52.097 00:05:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.097 00:05:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.097 00:05:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.097 00:05:22 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:52.097 00:05:22 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:53.483 0 00:20:53.483 00:05:23 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:53.483 00:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.483 00:05:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.483 00:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.483 00:05:23 -- host/multicontroller.sh@100 -- # killprocess 455566 00:20:53.483 00:05:23 -- common/autotest_common.sh@936 -- # '[' -z 455566 ']' 00:20:53.483 00:05:23 -- common/autotest_common.sh@940 -- # kill -0 455566 00:20:53.483 00:05:23 -- common/autotest_common.sh@941 -- # uname 00:20:53.483 00:05:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:53.483 00:05:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 455566 00:20:53.483 00:05:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:53.483 00:05:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:53.483 00:05:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 455566' 00:20:53.483 killing process with pid 455566 00:20:53.483 00:05:23 -- common/autotest_common.sh@955 -- # kill 455566 00:20:53.483 00:05:23 -- common/autotest_common.sh@960 -- # wait 455566 00:20:53.483 00:05:23 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.483 00:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.483 00:05:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.483 00:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.483 00:05:23 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:53.483 00:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.483 00:05:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.483 00:05:23 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:20:53.483 00:05:23 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:53.483 00:05:23 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:53.483 00:05:23 -- common/autotest_common.sh@1598 -- # read -r file 00:20:53.483 00:05:23 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:53.483 00:05:23 -- common/autotest_common.sh@1597 -- # sort -u 00:20:53.483 00:05:23 -- common/autotest_common.sh@1599 -- # cat 00:20:53.483 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:53.483 [2024-04-27 00:05:20.827958] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:20:53.483 [2024-04-27 00:05:20.828010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455566 ] 00:20:53.483 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.483 [2024-04-27 00:05:20.887546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.483 [2024-04-27 00:05:20.951707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.483 [2024-04-27 00:05:22.206282] bdev.c:4555:bdev_name_add: *ERROR*: Bdev name b766e0c3-6da3-4a12-9aa6-1e7a330da46e already exists 00:20:53.483 [2024-04-27 00:05:22.206314] bdev.c:7672:bdev_register: *ERROR*: Unable to add uuid:b766e0c3-6da3-4a12-9aa6-1e7a330da46e alias for bdev NVMe1n1 00:20:53.483 [2024-04-27 00:05:22.206324] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:53.483 Running I/O for 1 seconds... 
00:20:53.483 00:20:53.483 Latency(us) 00:20:53.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.483 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:53.483 NVMe0n1 : 1.00 30167.47 117.84 0.00 0.00 4232.51 2007.04 7591.25 00:20:53.483 =================================================================================================================== 00:20:53.483 Total : 30167.47 117.84 0.00 0.00 4232.51 2007.04 7591.25 00:20:53.483 Received shutdown signal, test time was about 1.000000 seconds 00:20:53.483 00:20:53.483 Latency(us) 00:20:53.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.483 =================================================================================================================== 00:20:53.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.483 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:53.483 00:05:23 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:53.483 00:05:23 -- common/autotest_common.sh@1598 -- # read -r file 00:20:53.483 00:05:23 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:53.483 00:05:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:53.483 00:05:23 -- nvmf/common.sh@117 -- # sync 00:20:53.483 00:05:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:53.483 00:05:23 -- nvmf/common.sh@120 -- # set +e 00:20:53.483 00:05:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.483 00:05:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:53.484 rmmod nvme_tcp 00:20:53.484 rmmod nvme_fabrics 00:20:53.484 rmmod nvme_keyring 00:20:53.484 00:05:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.484 00:05:23 -- nvmf/common.sh@124 -- # set -e 00:20:53.484 00:05:23 -- nvmf/common.sh@125 -- # return 0 00:20:53.484 00:05:23 -- nvmf/common.sh@478 -- # '[' -n 455230 ']' 00:20:53.484 00:05:23 -- nvmf/common.sh@479 -- # killprocess 455230 00:20:53.484 00:05:23 -- common/autotest_common.sh@936 -- # '[' -z 455230 ']' 00:20:53.484 00:05:23 -- common/autotest_common.sh@940 -- # kill -0 455230 00:20:53.484 00:05:23 -- common/autotest_common.sh@941 -- # uname 00:20:53.484 00:05:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:53.484 00:05:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 455230 00:20:53.745 00:05:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:53.745 00:05:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:53.745 00:05:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 455230' 00:20:53.745 killing process with pid 455230 00:20:53.745 00:05:23 -- common/autotest_common.sh@955 -- # kill 455230 00:20:53.745 00:05:23 -- common/autotest_common.sh@960 -- # wait 455230 00:20:53.745 00:05:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:53.745 00:05:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:53.745 00:05:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:53.745 00:05:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.745 00:05:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:53.745 00:05:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.745 00:05:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.745 00:05:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.290 00:05:25 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:56.290 00:20:56.290 real 0m13.584s 00:20:56.290 user 0m16.973s 00:20:56.290 sys 0m5.981s 00:20:56.290 00:05:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:56.290 00:05:25 -- common/autotest_common.sh@10 -- # set +x 00:20:56.290 ************************************ 00:20:56.290 END TEST nvmf_multicontroller 00:20:56.290 ************************************ 00:20:56.290 00:05:25 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:56.290 00:05:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:56.290 00:05:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:56.290 00:05:25 -- common/autotest_common.sh@10 -- # set +x 00:20:56.290 ************************************ 00:20:56.290 START TEST nvmf_aer 00:20:56.290 ************************************ 00:20:56.290 00:05:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:56.290 * Looking for test storage... 00:20:56.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:56.290 00:05:26 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.290 00:05:26 -- nvmf/common.sh@7 -- # uname -s 00:20:56.290 00:05:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.290 00:05:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.290 00:05:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.290 00:05:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.290 00:05:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.290 00:05:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.290 00:05:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.290 00:05:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.290 00:05:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.290 00:05:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.290 00:05:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:56.290 00:05:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:56.290 00:05:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.290 00:05:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.290 00:05:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.290 00:05:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.290 00:05:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.290 00:05:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.290 00:05:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.290 00:05:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.290 00:05:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.290 00:05:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.290 00:05:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.290 00:05:26 -- paths/export.sh@5 -- # export PATH 00:20:56.290 00:05:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.290 00:05:26 -- nvmf/common.sh@47 -- # : 0 00:20:56.290 00:05:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:56.290 00:05:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:56.290 00:05:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.290 00:05:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.290 00:05:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.290 00:05:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:56.290 00:05:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:56.291 00:05:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:56.291 00:05:26 -- host/aer.sh@11 -- # nvmftestinit 00:20:56.291 00:05:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:56.291 00:05:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.291 00:05:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:56.291 00:05:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:56.291 00:05:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:56.291 00:05:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.291 00:05:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.291 00:05:26 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.291 00:05:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:56.291 00:05:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:56.291 00:05:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:56.291 00:05:26 -- common/autotest_common.sh@10 -- # set +x 00:21:02.889 00:05:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:02.890 00:05:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.890 00:05:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.890 00:05:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.890 00:05:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.890 00:05:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.890 00:05:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.890 00:05:33 -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.890 00:05:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.890 00:05:33 -- nvmf/common.sh@296 -- # e810=() 00:21:02.890 00:05:33 -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.890 00:05:33 -- nvmf/common.sh@297 -- # x722=() 00:21:02.890 00:05:33 -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.890 00:05:33 -- nvmf/common.sh@298 -- # mlx=() 00:21:02.890 00:05:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.890 00:05:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.890 00:05:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.890 00:05:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.890 00:05:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.890 00:05:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.890 00:05:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.890 00:05:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.890 00:05:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.890 00:05:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:02.890 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:02.890 00:05:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.890 00:05:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.890 00:05:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.890 00:05:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.890 00:05:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.890 00:05:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.154 00:05:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:03.154 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:03.154 
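The block above is nvmf/common.sh walking the PCI bus and matching the two Intel E810 ports (vendor 0x8086, device 0x159b) requested by SPDK_TEST_NVMF_NICS=e810. A rough hand-run equivalent of that lookup, a sketch only, using the 0000:31:00.x addresses reported above (lspci and the sysfs layout are plain Linux tooling, not part of the test suite):
# list the E810 functions the script matched
lspci -D -d 8086:159b
# the netdev behind each function is what the script later reports as cvl_0_0 / cvl_0_1
ls /sys/bus/pci/devices/0000:31:00.0/net/
ls /sys/bus/pci/devices/0000:31:00.1/net/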
00:05:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.154 00:05:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.154 00:05:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.154 00:05:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.154 00:05:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.154 00:05:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.154 00:05:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:03.154 00:05:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:03.154 00:05:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.154 00:05:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.154 00:05:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:03.154 00:05:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.154 00:05:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:03.154 Found net devices under 0000:31:00.0: cvl_0_0 00:21:03.154 00:05:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.154 00:05:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.154 00:05:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.154 00:05:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:03.154 00:05:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.154 00:05:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:03.154 Found net devices under 0000:31:00.1: cvl_0_1 00:21:03.154 00:05:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.154 00:05:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:03.154 00:05:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:03.154 00:05:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:03.154 00:05:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:03.154 00:05:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:03.154 00:05:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.154 00:05:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.154 00:05:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.154 00:05:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:03.154 00:05:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.154 00:05:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.154 00:05:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:03.154 00:05:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.154 00:05:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.154 00:05:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:03.154 00:05:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:03.154 00:05:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.154 00:05:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.154 00:05:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.154 00:05:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.154 00:05:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:03.154 00:05:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.415 00:05:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.415 00:05:33 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.415 00:05:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:03.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:21:03.415 00:21:03.415 --- 10.0.0.2 ping statistics --- 00:21:03.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.415 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:21:03.415 00:05:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:03.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:21:03.415 00:21:03.415 --- 10.0.0.1 ping statistics --- 00:21:03.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.415 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:21:03.415 00:05:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.415 00:05:33 -- nvmf/common.sh@411 -- # return 0 00:21:03.415 00:05:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:03.415 00:05:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.415 00:05:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:03.415 00:05:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:03.415 00:05:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.415 00:05:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:03.415 00:05:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:03.415 00:05:33 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:03.415 00:05:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:03.415 00:05:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:03.415 00:05:33 -- common/autotest_common.sh@10 -- # set +x 00:21:03.415 00:05:33 -- nvmf/common.sh@470 -- # nvmfpid=460321 00:21:03.415 00:05:33 -- nvmf/common.sh@471 -- # waitforlisten 460321 00:21:03.415 00:05:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:03.415 00:05:33 -- common/autotest_common.sh@817 -- # '[' -z 460321 ']' 00:21:03.415 00:05:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.415 00:05:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:03.415 00:05:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.415 00:05:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:03.415 00:05:33 -- common/autotest_common.sh@10 -- # set +x 00:21:03.415 [2024-04-27 00:05:33.532345] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:21:03.415 [2024-04-27 00:05:33.532410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.415 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.415 [2024-04-27 00:05:33.603269] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.675 [2024-04-27 00:05:33.677190] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
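Before the target was launched above, nvmf_tcp_init built the namespace topology the rest of the run depends on: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the other (cvl_0_1) stays in the root namespace as 10.0.0.1, inbound TCP port 4420 is allowed on the initiator side, and reachability is checked in both directions with ping. Condensed from the traced commands, the plumbing is roughly:
# condensed sketch of the nvmf_tcp_init steps traced above (interface names are this host's ice ports)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> root namespace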
00:21:03.675 [2024-04-27 00:05:33.677232] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.675 [2024-04-27 00:05:33.677239] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.675 [2024-04-27 00:05:33.677246] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.675 [2024-04-27 00:05:33.677252] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.675 [2024-04-27 00:05:33.677364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.675 [2024-04-27 00:05:33.677487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.675 [2024-04-27 00:05:33.677641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.675 [2024-04-27 00:05:33.677642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.245 00:05:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:04.245 00:05:34 -- common/autotest_common.sh@850 -- # return 0 00:21:04.245 00:05:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:04.245 00:05:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:04.246 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.246 00:05:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.246 00:05:34 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:04.246 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.246 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.246 [2024-04-27 00:05:34.360371] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.246 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.246 00:05:34 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:04.246 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.246 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.246 Malloc0 00:21:04.246 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.246 00:05:34 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:04.246 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.246 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.246 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.246 00:05:34 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:04.246 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.246 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.246 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.246 00:05:34 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.246 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.246 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.246 [2024-04-27 00:05:34.419777] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.246 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.246 00:05:34 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:04.246 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.246 00:05:34 -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.246 [2024-04-27 00:05:34.431552] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:04.246 [ 00:21:04.246 { 00:21:04.246 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:04.246 "subtype": "Discovery", 00:21:04.246 "listen_addresses": [], 00:21:04.246 "allow_any_host": true, 00:21:04.246 "hosts": [] 00:21:04.246 }, 00:21:04.246 { 00:21:04.246 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.246 "subtype": "NVMe", 00:21:04.246 "listen_addresses": [ 00:21:04.246 { 00:21:04.246 "transport": "TCP", 00:21:04.246 "trtype": "TCP", 00:21:04.246 "adrfam": "IPv4", 00:21:04.246 "traddr": "10.0.0.2", 00:21:04.246 "trsvcid": "4420" 00:21:04.246 } 00:21:04.246 ], 00:21:04.246 "allow_any_host": true, 00:21:04.246 "hosts": [], 00:21:04.246 "serial_number": "SPDK00000000000001", 00:21:04.246 "model_number": "SPDK bdev Controller", 00:21:04.246 "max_namespaces": 2, 00:21:04.246 "min_cntlid": 1, 00:21:04.246 "max_cntlid": 65519, 00:21:04.246 "namespaces": [ 00:21:04.246 { 00:21:04.246 "nsid": 1, 00:21:04.246 "bdev_name": "Malloc0", 00:21:04.246 "name": "Malloc0", 00:21:04.246 "nguid": "B2067B4B2E3447D99E2E186007D78315", 00:21:04.246 "uuid": "b2067b4b-2e34-47d9-9e2e-186007d78315" 00:21:04.246 } 00:21:04.246 ] 00:21:04.246 } 00:21:04.246 ] 00:21:04.246 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.246 00:05:34 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:04.246 00:05:34 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:04.246 00:05:34 -- host/aer.sh@33 -- # aerpid=460669 00:21:04.246 00:05:34 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:04.246 00:05:34 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:04.246 00:05:34 -- common/autotest_common.sh@1251 -- # local i=0 00:21:04.246 00:05:34 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:04.246 00:05:34 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:04.246 00:05:34 -- common/autotest_common.sh@1254 -- # i=1 00:21:04.246 00:05:34 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:04.507 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.507 00:05:34 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:04.507 00:05:34 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:04.507 00:05:34 -- common/autotest_common.sh@1254 -- # i=2 00:21:04.507 00:05:34 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:04.507 00:05:34 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:04.507 00:05:34 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:04.507 00:05:34 -- common/autotest_common.sh@1262 -- # return 0 00:21:04.507 00:05:34 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:04.507 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.507 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.507 Malloc1 00:21:04.507 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.507 00:05:34 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:04.507 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.507 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.507 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.507 00:05:34 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:04.507 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.507 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.507 Asynchronous Event Request test 00:21:04.507 Attaching to 10.0.0.2 00:21:04.507 Attached to 10.0.0.2 00:21:04.507 Registering asynchronous event callbacks... 00:21:04.507 Starting namespace attribute notice tests for all controllers... 00:21:04.507 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:04.507 aer_cb - Changed Namespace 00:21:04.507 Cleaning up... 00:21:04.507 [ 00:21:04.507 { 00:21:04.507 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:04.507 "subtype": "Discovery", 00:21:04.507 "listen_addresses": [], 00:21:04.507 "allow_any_host": true, 00:21:04.507 "hosts": [] 00:21:04.507 }, 00:21:04.507 { 00:21:04.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.507 "subtype": "NVMe", 00:21:04.507 "listen_addresses": [ 00:21:04.507 { 00:21:04.507 "transport": "TCP", 00:21:04.507 "trtype": "TCP", 00:21:04.507 "adrfam": "IPv4", 00:21:04.507 "traddr": "10.0.0.2", 00:21:04.507 "trsvcid": "4420" 00:21:04.507 } 00:21:04.507 ], 00:21:04.507 "allow_any_host": true, 00:21:04.507 "hosts": [], 00:21:04.507 "serial_number": "SPDK00000000000001", 00:21:04.507 "model_number": "SPDK bdev Controller", 00:21:04.507 "max_namespaces": 2, 00:21:04.507 "min_cntlid": 1, 00:21:04.507 "max_cntlid": 65519, 00:21:04.507 "namespaces": [ 00:21:04.507 { 00:21:04.507 "nsid": 1, 00:21:04.507 "bdev_name": "Malloc0", 00:21:04.507 "name": "Malloc0", 00:21:04.507 "nguid": "B2067B4B2E3447D99E2E186007D78315", 00:21:04.507 "uuid": "b2067b4b-2e34-47d9-9e2e-186007d78315" 00:21:04.507 }, 00:21:04.507 { 00:21:04.507 "nsid": 2, 00:21:04.507 "bdev_name": "Malloc1", 00:21:04.507 "name": "Malloc1", 00:21:04.507 "nguid": "8DA87F44A9124639AE0F904F59DB3A00", 00:21:04.507 "uuid": "8da87f44-a912-4639-ae0f-904f59db3a00" 00:21:04.507 } 00:21:04.507 ] 00:21:04.507 } 00:21:04.507 ] 00:21:04.507 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.507 00:05:34 -- host/aer.sh@43 -- # wait 460669 00:21:04.507 00:05:34 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:04.507 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.507 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.767 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.767 00:05:34 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:04.767 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.767 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.767 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.767 00:05:34 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.767 00:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.767 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.767 00:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.767 00:05:34 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:04.767 00:05:34 -- host/aer.sh@51 -- # nvmftestfini 00:21:04.767 00:05:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:04.767 00:05:34 -- nvmf/common.sh@117 -- # sync 00:21:04.767 00:05:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.767 00:05:34 -- nvmf/common.sh@120 -- # set +e 00:21:04.767 00:05:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.768 00:05:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.768 rmmod nvme_tcp 00:21:04.768 rmmod nvme_fabrics 00:21:04.768 rmmod nvme_keyring 00:21:04.768 00:05:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.768 00:05:34 -- nvmf/common.sh@124 -- # set -e 00:21:04.768 00:05:34 -- nvmf/common.sh@125 -- # return 0 00:21:04.768 00:05:34 -- nvmf/common.sh@478 -- # '[' -n 460321 ']' 00:21:04.768 00:05:34 -- nvmf/common.sh@479 -- # killprocess 460321 00:21:04.768 00:05:34 -- common/autotest_common.sh@936 -- # '[' -z 460321 ']' 00:21:04.768 00:05:34 -- common/autotest_common.sh@940 -- # kill -0 460321 00:21:04.768 00:05:34 -- common/autotest_common.sh@941 -- # uname 00:21:04.768 00:05:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.768 00:05:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 460321 00:21:04.768 00:05:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:04.768 00:05:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:04.768 00:05:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 460321' 00:21:04.768 killing process with pid 460321 00:21:04.768 00:05:34 -- common/autotest_common.sh@955 -- # kill 460321 00:21:04.768 [2024-04-27 00:05:34.898202] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:04.768 00:05:34 -- common/autotest_common.sh@960 -- # wait 460321 00:21:05.027 00:05:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:05.027 00:05:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:05.027 00:05:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:05.027 00:05:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.027 00:05:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.027 00:05:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.027 00:05:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.027 00:05:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.938 00:05:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:06.938 00:21:06.938 real 0m10.941s 00:21:06.938 user 0m7.599s 00:21:06.938 sys 0m5.613s 00:21:06.938 00:05:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:06.938 00:05:37 -- common/autotest_common.sh@10 -- # set +x 00:21:06.938 ************************************ 00:21:06.939 END TEST nvmf_aer 00:21:06.939 ************************************ 00:21:06.939 00:05:37 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:06.939 00:05:37 -- common/autotest_common.sh@1087 -- # '[' 3 
-le 1 ']' 00:21:06.939 00:05:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:06.939 00:05:37 -- common/autotest_common.sh@10 -- # set +x 00:21:07.198 ************************************ 00:21:07.198 START TEST nvmf_async_init 00:21:07.198 ************************************ 00:21:07.198 00:05:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:07.198 * Looking for test storage... 00:21:07.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:07.198 00:05:37 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.198 00:05:37 -- nvmf/common.sh@7 -- # uname -s 00:21:07.459 00:05:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.459 00:05:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.459 00:05:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.459 00:05:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.459 00:05:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.459 00:05:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.459 00:05:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.459 00:05:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.459 00:05:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.459 00:05:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.459 00:05:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.459 00:05:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.459 00:05:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.459 00:05:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.459 00:05:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.459 00:05:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.459 00:05:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.459 00:05:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.459 00:05:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.459 00:05:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.460 00:05:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.460 00:05:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.460 00:05:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.460 00:05:37 -- paths/export.sh@5 -- # export PATH 00:21:07.460 00:05:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.460 00:05:37 -- nvmf/common.sh@47 -- # : 0 00:21:07.460 00:05:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.460 00:05:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.460 00:05:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.460 00:05:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.460 00:05:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.460 00:05:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.460 00:05:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.460 00:05:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.460 00:05:37 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:07.460 00:05:37 -- host/async_init.sh@14 -- # null_block_size=512 00:21:07.460 00:05:37 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:07.460 00:05:37 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:07.460 00:05:37 -- host/async_init.sh@20 -- # tr -d - 00:21:07.460 00:05:37 -- host/async_init.sh@20 -- # uuidgen 00:21:07.460 00:05:37 -- host/async_init.sh@20 -- # nguid=a5667e00919a498cb5862ae0875d81f6 00:21:07.460 00:05:37 -- host/async_init.sh@22 -- # nvmftestinit 00:21:07.460 00:05:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:07.460 00:05:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.460 00:05:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:07.460 00:05:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:07.460 00:05:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:07.460 00:05:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.460 00:05:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.460 00:05:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.460 
00:05:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:07.460 00:05:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:07.460 00:05:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.460 00:05:37 -- common/autotest_common.sh@10 -- # set +x 00:21:14.045 00:05:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:14.045 00:05:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:14.045 00:05:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:14.045 00:05:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:14.045 00:05:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:14.045 00:05:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:14.045 00:05:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:14.045 00:05:44 -- nvmf/common.sh@295 -- # net_devs=() 00:21:14.045 00:05:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:14.045 00:05:44 -- nvmf/common.sh@296 -- # e810=() 00:21:14.045 00:05:44 -- nvmf/common.sh@296 -- # local -ga e810 00:21:14.045 00:05:44 -- nvmf/common.sh@297 -- # x722=() 00:21:14.045 00:05:44 -- nvmf/common.sh@297 -- # local -ga x722 00:21:14.045 00:05:44 -- nvmf/common.sh@298 -- # mlx=() 00:21:14.045 00:05:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:14.045 00:05:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.045 00:05:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:14.045 00:05:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:14.045 00:05:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:14.046 00:05:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:14.046 00:05:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.046 00:05:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:14.046 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:14.046 00:05:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.046 00:05:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:14.046 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:14.046 00:05:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.046 
00:05:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:14.046 00:05:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.046 00:05:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.046 00:05:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:14.046 00:05:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.046 00:05:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:14.046 Found net devices under 0000:31:00.0: cvl_0_0 00:21:14.046 00:05:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.046 00:05:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.046 00:05:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.046 00:05:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:14.046 00:05:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.046 00:05:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:14.046 Found net devices under 0000:31:00.1: cvl_0_1 00:21:14.046 00:05:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.046 00:05:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:14.046 00:05:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:14.046 00:05:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:14.046 00:05:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:14.046 00:05:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.046 00:05:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.046 00:05:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.046 00:05:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:14.046 00:05:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.046 00:05:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.306 00:05:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:14.306 00:05:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.306 00:05:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.307 00:05:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:14.307 00:05:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:14.307 00:05:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.307 00:05:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.307 00:05:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.307 00:05:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.307 00:05:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:14.307 00:05:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.571 00:05:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.571 00:05:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:21:14.571 00:05:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:14.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:21:14.571 00:21:14.571 --- 10.0.0.2 ping statistics --- 00:21:14.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.571 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:21:14.571 00:05:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:21:14.571 00:21:14.571 --- 10.0.0.1 ping statistics --- 00:21:14.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.571 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:21:14.571 00:05:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.571 00:05:44 -- nvmf/common.sh@411 -- # return 0 00:21:14.571 00:05:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:14.571 00:05:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.571 00:05:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:14.571 00:05:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:14.571 00:05:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.571 00:05:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:14.571 00:05:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:14.571 00:05:44 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:14.571 00:05:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:14.571 00:05:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:14.571 00:05:44 -- common/autotest_common.sh@10 -- # set +x 00:21:14.571 00:05:44 -- nvmf/common.sh@470 -- # nvmfpid=464983 00:21:14.571 00:05:44 -- nvmf/common.sh@471 -- # waitforlisten 464983 00:21:14.571 00:05:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:14.571 00:05:44 -- common/autotest_common.sh@817 -- # '[' -z 464983 ']' 00:21:14.571 00:05:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.571 00:05:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:14.571 00:05:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.571 00:05:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:14.571 00:05:44 -- common/autotest_common.sh@10 -- # set +x 00:21:14.571 [2024-04-27 00:05:44.683484] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:21:14.571 [2024-04-27 00:05:44.683552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.571 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.571 [2024-04-27 00:05:44.754320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.859 [2024-04-27 00:05:44.827403] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.859 [2024-04-27 00:05:44.827442] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
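The async_init pass starts its target the same way as the earlier tests, only on a single core (-m 0x1). waitforlisten is the suite's helper that blocks until the target's JSON-RPC socket answers; a minimal stand-in for it, assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock socket, would be:
# start the target inside the test namespace, as in the traced command above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# poll the default RPC socket until the target answers (stand-in for the waitforlisten helper)
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done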
00:21:14.859 [2024-04-27 00:05:44.827451] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.859 [2024-04-27 00:05:44.827458] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.859 [2024-04-27 00:05:44.827464] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.859 [2024-04-27 00:05:44.827487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.473 00:05:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:15.473 00:05:45 -- common/autotest_common.sh@850 -- # return 0 00:21:15.473 00:05:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:15.473 00:05:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:15.473 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.473 00:05:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.473 00:05:45 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:15.473 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.473 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.473 [2024-04-27 00:05:45.490290] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.473 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.473 00:05:45 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:15.473 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.473 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.473 null0 00:21:15.473 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.473 00:05:45 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:15.473 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.473 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.473 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.473 00:05:45 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:15.473 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.473 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.473 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.473 00:05:45 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a5667e00919a498cb5862ae0875d81f6 00:21:15.473 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.473 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.473 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.473 00:05:45 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:15.473 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.473 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.473 [2024-04-27 00:05:45.546571] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.473 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.473 00:05:45 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:15.473 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.473 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.734 nvme0n1 00:21:15.734 
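The async_init bring-up above reduces to a short RPC sequence against the target's /var/tmp/spdk.sock: create the TCP transport, back subsystem cnode0 with a 1 GiB null bdev, expose it on 10.0.0.2:4420, then attach a bdev_nvme controller to it in the same application, which is what makes nvme0n1 appear. A condensed sketch of the same calls, assuming an SPDK checkout with scripts/rpc.py at hand (rpc_cmd in the log is essentially a wrapper around it):

  rpc=./scripts/rpc.py                          # path is an assumption; adjust to your tree
  $rpc nvmf_create_transport -t tcp -o          # same transport flags the test passes
  $rpc bdev_null_create null0 1024 512          # 1024 MiB backing bdev, 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a5667e00919a498cb5862ae0875d81f6
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side of the loopback: attach over the fabric; this produces the
  # nvme0n1 bdev whose JSON is dumped next.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0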
00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.734 00:05:45 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:15.734 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.734 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.734 [ 00:21:15.734 { 00:21:15.734 "name": "nvme0n1", 00:21:15.734 "aliases": [ 00:21:15.734 "a5667e00-919a-498c-b586-2ae0875d81f6" 00:21:15.734 ], 00:21:15.734 "product_name": "NVMe disk", 00:21:15.734 "block_size": 512, 00:21:15.734 "num_blocks": 2097152, 00:21:15.734 "uuid": "a5667e00-919a-498c-b586-2ae0875d81f6", 00:21:15.734 "assigned_rate_limits": { 00:21:15.734 "rw_ios_per_sec": 0, 00:21:15.734 "rw_mbytes_per_sec": 0, 00:21:15.734 "r_mbytes_per_sec": 0, 00:21:15.734 "w_mbytes_per_sec": 0 00:21:15.734 }, 00:21:15.734 "claimed": false, 00:21:15.734 "zoned": false, 00:21:15.734 "supported_io_types": { 00:21:15.734 "read": true, 00:21:15.734 "write": true, 00:21:15.734 "unmap": false, 00:21:15.734 "write_zeroes": true, 00:21:15.734 "flush": true, 00:21:15.734 "reset": true, 00:21:15.734 "compare": true, 00:21:15.734 "compare_and_write": true, 00:21:15.734 "abort": true, 00:21:15.734 "nvme_admin": true, 00:21:15.734 "nvme_io": true 00:21:15.734 }, 00:21:15.734 "memory_domains": [ 00:21:15.734 { 00:21:15.734 "dma_device_id": "system", 00:21:15.734 "dma_device_type": 1 00:21:15.734 } 00:21:15.734 ], 00:21:15.734 "driver_specific": { 00:21:15.734 "nvme": [ 00:21:15.734 { 00:21:15.734 "trid": { 00:21:15.734 "trtype": "TCP", 00:21:15.734 "adrfam": "IPv4", 00:21:15.734 "traddr": "10.0.0.2", 00:21:15.734 "trsvcid": "4420", 00:21:15.734 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:15.734 }, 00:21:15.734 "ctrlr_data": { 00:21:15.734 "cntlid": 1, 00:21:15.734 "vendor_id": "0x8086", 00:21:15.734 "model_number": "SPDK bdev Controller", 00:21:15.734 "serial_number": "00000000000000000000", 00:21:15.734 "firmware_revision": "24.05", 00:21:15.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:15.734 "oacs": { 00:21:15.734 "security": 0, 00:21:15.734 "format": 0, 00:21:15.734 "firmware": 0, 00:21:15.734 "ns_manage": 0 00:21:15.734 }, 00:21:15.734 "multi_ctrlr": true, 00:21:15.734 "ana_reporting": false 00:21:15.734 }, 00:21:15.734 "vs": { 00:21:15.734 "nvme_version": "1.3" 00:21:15.734 }, 00:21:15.734 "ns_data": { 00:21:15.734 "id": 1, 00:21:15.734 "can_share": true 00:21:15.734 } 00:21:15.734 } 00:21:15.734 ], 00:21:15.734 "mp_policy": "active_passive" 00:21:15.734 } 00:21:15.734 } 00:21:15.734 ] 00:21:15.734 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.734 00:05:45 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:15.734 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.734 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.734 [2024-04-27 00:05:45.811102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:15.734 [2024-04-27 00:05:45.811163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb58390 (9): Bad file descriptor 00:21:15.734 [2024-04-27 00:05:45.942934] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
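The dump-and-reset pair above is the core of the check: after bdev_nvme_reset_controller the same nvme0n1 bdev comes back, but over a fresh admin connection, so ctrlr_data.cntlid moves from 1 to 2 (and later to 3 when the test re-attaches over TLS on port 4421). A small hypothetical helper for watching that field, assuming jq is installed:

  cntlid() {
      ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
          | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'
  }
  cntlid                                          # -> 1 before the reset
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0
  cntlid                                          # -> 2 afterwards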
00:21:15.734 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.734 00:05:45 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:15.734 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.734 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.734 [ 00:21:15.734 { 00:21:15.734 "name": "nvme0n1", 00:21:15.734 "aliases": [ 00:21:15.734 "a5667e00-919a-498c-b586-2ae0875d81f6" 00:21:15.734 ], 00:21:15.734 "product_name": "NVMe disk", 00:21:15.734 "block_size": 512, 00:21:15.734 "num_blocks": 2097152, 00:21:15.734 "uuid": "a5667e00-919a-498c-b586-2ae0875d81f6", 00:21:15.995 "assigned_rate_limits": { 00:21:15.995 "rw_ios_per_sec": 0, 00:21:15.995 "rw_mbytes_per_sec": 0, 00:21:15.995 "r_mbytes_per_sec": 0, 00:21:15.995 "w_mbytes_per_sec": 0 00:21:15.995 }, 00:21:15.995 "claimed": false, 00:21:15.995 "zoned": false, 00:21:15.995 "supported_io_types": { 00:21:15.995 "read": true, 00:21:15.995 "write": true, 00:21:15.995 "unmap": false, 00:21:15.995 "write_zeroes": true, 00:21:15.995 "flush": true, 00:21:15.995 "reset": true, 00:21:15.995 "compare": true, 00:21:15.995 "compare_and_write": true, 00:21:15.995 "abort": true, 00:21:15.995 "nvme_admin": true, 00:21:15.995 "nvme_io": true 00:21:15.995 }, 00:21:15.995 "memory_domains": [ 00:21:15.995 { 00:21:15.995 "dma_device_id": "system", 00:21:15.995 "dma_device_type": 1 00:21:15.995 } 00:21:15.995 ], 00:21:15.995 "driver_specific": { 00:21:15.995 "nvme": [ 00:21:15.995 { 00:21:15.995 "trid": { 00:21:15.995 "trtype": "TCP", 00:21:15.995 "adrfam": "IPv4", 00:21:15.995 "traddr": "10.0.0.2", 00:21:15.995 "trsvcid": "4420", 00:21:15.995 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:15.995 }, 00:21:15.995 "ctrlr_data": { 00:21:15.995 "cntlid": 2, 00:21:15.995 "vendor_id": "0x8086", 00:21:15.995 "model_number": "SPDK bdev Controller", 00:21:15.995 "serial_number": "00000000000000000000", 00:21:15.995 "firmware_revision": "24.05", 00:21:15.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:15.995 "oacs": { 00:21:15.995 "security": 0, 00:21:15.995 "format": 0, 00:21:15.995 "firmware": 0, 00:21:15.995 "ns_manage": 0 00:21:15.995 }, 00:21:15.995 "multi_ctrlr": true, 00:21:15.995 "ana_reporting": false 00:21:15.995 }, 00:21:15.995 "vs": { 00:21:15.995 "nvme_version": "1.3" 00:21:15.995 }, 00:21:15.995 "ns_data": { 00:21:15.995 "id": 1, 00:21:15.995 "can_share": true 00:21:15.995 } 00:21:15.995 } 00:21:15.995 ], 00:21:15.995 "mp_policy": "active_passive" 00:21:15.995 } 00:21:15.995 } 00:21:15.995 ] 00:21:15.995 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.995 00:05:45 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.995 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.995 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.995 00:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.995 00:05:45 -- host/async_init.sh@53 -- # mktemp 00:21:15.995 00:05:45 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5P3YvAyvIk 00:21:15.995 00:05:45 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:15.995 00:05:45 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5P3YvAyvIk 00:21:15.995 00:05:45 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:15.995 00:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.995 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.995 00:05:45 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.995 00:05:46 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:15.995 00:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.995 00:05:46 -- common/autotest_common.sh@10 -- # set +x 00:21:15.995 [2024-04-27 00:05:46.007722] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:15.995 [2024-04-27 00:05:46.007843] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:15.995 00:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.995 00:05:46 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5P3YvAyvIk 00:21:15.995 00:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.995 00:05:46 -- common/autotest_common.sh@10 -- # set +x 00:21:15.995 [2024-04-27 00:05:46.019748] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:15.995 00:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.995 00:05:46 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5P3YvAyvIk 00:21:15.995 00:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.995 00:05:46 -- common/autotest_common.sh@10 -- # set +x 00:21:15.995 [2024-04-27 00:05:46.031782] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.995 [2024-04-27 00:05:46.031820] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:15.995 nvme0n1 00:21:15.995 00:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.995 00:05:46 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:15.995 00:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.995 00:05:46 -- common/autotest_common.sh@10 -- # set +x 00:21:15.995 [ 00:21:15.995 { 00:21:15.995 "name": "nvme0n1", 00:21:15.995 "aliases": [ 00:21:15.995 "a5667e00-919a-498c-b586-2ae0875d81f6" 00:21:15.995 ], 00:21:15.995 "product_name": "NVMe disk", 00:21:15.995 "block_size": 512, 00:21:15.995 "num_blocks": 2097152, 00:21:15.995 "uuid": "a5667e00-919a-498c-b586-2ae0875d81f6", 00:21:15.995 "assigned_rate_limits": { 00:21:15.995 "rw_ios_per_sec": 0, 00:21:15.995 "rw_mbytes_per_sec": 0, 00:21:15.995 "r_mbytes_per_sec": 0, 00:21:15.995 "w_mbytes_per_sec": 0 00:21:15.995 }, 00:21:15.995 "claimed": false, 00:21:15.995 "zoned": false, 00:21:15.995 "supported_io_types": { 00:21:15.995 "read": true, 00:21:15.995 "write": true, 00:21:15.995 "unmap": false, 00:21:15.995 "write_zeroes": true, 00:21:15.995 "flush": true, 00:21:15.995 "reset": true, 00:21:15.995 "compare": true, 00:21:15.995 "compare_and_write": true, 00:21:15.995 "abort": true, 00:21:15.995 "nvme_admin": true, 00:21:15.995 "nvme_io": true 00:21:15.995 }, 00:21:15.995 "memory_domains": [ 00:21:15.995 { 00:21:15.995 "dma_device_id": "system", 00:21:15.995 "dma_device_type": 1 00:21:15.995 } 00:21:15.995 ], 00:21:15.995 "driver_specific": { 00:21:15.995 "nvme": [ 00:21:15.995 { 00:21:15.995 "trid": { 00:21:15.995 "trtype": "TCP", 00:21:15.995 "adrfam": "IPv4", 00:21:15.995 "traddr": "10.0.0.2", 
00:21:15.995 "trsvcid": "4421", 00:21:15.995 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:15.995 }, 00:21:15.995 "ctrlr_data": { 00:21:15.995 "cntlid": 3, 00:21:15.995 "vendor_id": "0x8086", 00:21:15.996 "model_number": "SPDK bdev Controller", 00:21:15.996 "serial_number": "00000000000000000000", 00:21:15.996 "firmware_revision": "24.05", 00:21:15.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:15.996 "oacs": { 00:21:15.996 "security": 0, 00:21:15.996 "format": 0, 00:21:15.996 "firmware": 0, 00:21:15.996 "ns_manage": 0 00:21:15.996 }, 00:21:15.996 "multi_ctrlr": true, 00:21:15.996 "ana_reporting": false 00:21:15.996 }, 00:21:15.996 "vs": { 00:21:15.996 "nvme_version": "1.3" 00:21:15.996 }, 00:21:15.996 "ns_data": { 00:21:15.996 "id": 1, 00:21:15.996 "can_share": true 00:21:15.996 } 00:21:15.996 } 00:21:15.996 ], 00:21:15.996 "mp_policy": "active_passive" 00:21:15.996 } 00:21:15.996 } 00:21:15.996 ] 00:21:15.996 00:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.996 00:05:46 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.996 00:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.996 00:05:46 -- common/autotest_common.sh@10 -- # set +x 00:21:15.996 00:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.996 00:05:46 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.5P3YvAyvIk 00:21:15.996 00:05:46 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:15.996 00:05:46 -- host/async_init.sh@78 -- # nvmftestfini 00:21:15.996 00:05:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:15.996 00:05:46 -- nvmf/common.sh@117 -- # sync 00:21:15.996 00:05:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:15.996 00:05:46 -- nvmf/common.sh@120 -- # set +e 00:21:15.996 00:05:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:15.996 00:05:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:15.996 rmmod nvme_tcp 00:21:15.996 rmmod nvme_fabrics 00:21:15.996 rmmod nvme_keyring 00:21:16.257 00:05:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:16.257 00:05:46 -- nvmf/common.sh@124 -- # set -e 00:21:16.257 00:05:46 -- nvmf/common.sh@125 -- # return 0 00:21:16.257 00:05:46 -- nvmf/common.sh@478 -- # '[' -n 464983 ']' 00:21:16.257 00:05:46 -- nvmf/common.sh@479 -- # killprocess 464983 00:21:16.257 00:05:46 -- common/autotest_common.sh@936 -- # '[' -z 464983 ']' 00:21:16.257 00:05:46 -- common/autotest_common.sh@940 -- # kill -0 464983 00:21:16.257 00:05:46 -- common/autotest_common.sh@941 -- # uname 00:21:16.257 00:05:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:16.257 00:05:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 464983 00:21:16.257 00:05:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:16.257 00:05:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:16.257 00:05:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 464983' 00:21:16.257 killing process with pid 464983 00:21:16.257 00:05:46 -- common/autotest_common.sh@955 -- # kill 464983 00:21:16.257 [2024-04-27 00:05:46.289062] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:16.257 [2024-04-27 00:05:46.289090] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:16.257 00:05:46 -- common/autotest_common.sh@960 -- # wait 464983 00:21:16.257 00:05:46 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:16.257 00:05:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:16.257 00:05:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:16.257 00:05:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:16.258 00:05:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:16.258 00:05:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.258 00:05:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.258 00:05:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.803 00:05:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:18.803 00:21:18.803 real 0m11.174s 00:21:18.803 user 0m3.898s 00:21:18.803 sys 0m5.719s 00:21:18.803 00:05:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:18.803 00:05:48 -- common/autotest_common.sh@10 -- # set +x 00:21:18.803 ************************************ 00:21:18.803 END TEST nvmf_async_init 00:21:18.803 ************************************ 00:21:18.803 00:05:48 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:18.803 00:05:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:18.803 00:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:18.803 00:05:48 -- common/autotest_common.sh@10 -- # set +x 00:21:18.803 ************************************ 00:21:18.803 START TEST dma 00:21:18.803 ************************************ 00:21:18.803 00:05:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:18.803 * Looking for test storage... 00:21:18.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:18.803 00:05:48 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.803 00:05:48 -- nvmf/common.sh@7 -- # uname -s 00:21:18.803 00:05:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.803 00:05:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.803 00:05:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.803 00:05:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.803 00:05:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.803 00:05:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.803 00:05:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.803 00:05:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.803 00:05:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.803 00:05:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.803 00:05:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:18.803 00:05:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:18.803 00:05:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.803 00:05:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.803 00:05:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:18.803 00:05:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.803 00:05:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:18.803 00:05:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.803 00:05:48 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.803 00:05:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.803 00:05:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.803 00:05:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.803 00:05:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.803 00:05:48 -- paths/export.sh@5 -- # export PATH 00:21:18.803 00:05:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.803 00:05:48 -- nvmf/common.sh@47 -- # : 0 00:21:18.803 00:05:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:18.803 00:05:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:18.803 00:05:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.803 00:05:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.803 00:05:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.803 00:05:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:18.803 00:05:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:18.803 00:05:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:18.803 00:05:48 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:18.803 00:05:48 -- host/dma.sh@13 -- # exit 0 00:21:18.803 00:21:18.803 real 0m0.128s 00:21:18.803 user 0m0.061s 00:21:18.803 sys 0m0.074s 00:21:18.803 00:05:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:18.803 00:05:48 -- common/autotest_common.sh@10 -- # set +x 00:21:18.803 ************************************ 00:21:18.803 END TEST dma 00:21:18.803 
************************************ 00:21:18.803 00:05:48 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:18.803 00:05:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:18.803 00:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:18.803 00:05:48 -- common/autotest_common.sh@10 -- # set +x 00:21:18.803 ************************************ 00:21:18.803 START TEST nvmf_identify 00:21:18.803 ************************************ 00:21:18.803 00:05:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:19.063 * Looking for test storage... 00:21:19.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:19.063 00:05:49 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.063 00:05:49 -- nvmf/common.sh@7 -- # uname -s 00:21:19.063 00:05:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.063 00:05:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.063 00:05:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.063 00:05:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.063 00:05:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.063 00:05:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.064 00:05:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.064 00:05:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.064 00:05:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.064 00:05:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.064 00:05:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:19.064 00:05:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:19.064 00:05:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.064 00:05:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.064 00:05:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.064 00:05:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.064 00:05:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.064 00:05:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.064 00:05:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.064 00:05:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.064 00:05:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.064 00:05:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.064 00:05:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.064 00:05:49 -- paths/export.sh@5 -- # export PATH 00:21:19.064 00:05:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.064 00:05:49 -- nvmf/common.sh@47 -- # : 0 00:21:19.064 00:05:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.064 00:05:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.064 00:05:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.064 00:05:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.064 00:05:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.064 00:05:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.064 00:05:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.064 00:05:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:19.064 00:05:49 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:19.064 00:05:49 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:19.064 00:05:49 -- host/identify.sh@14 -- # nvmftestinit 00:21:19.064 00:05:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:19.064 00:05:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.064 00:05:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:19.064 00:05:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:19.064 00:05:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:19.064 00:05:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.064 00:05:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.064 00:05:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.064 00:05:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:19.064 00:05:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:19.064 00:05:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:19.064 00:05:49 -- common/autotest_common.sh@10 -- # set +x 00:21:27.223 00:05:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
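The NIC detection that gather_supported_nvmf_pci_devs runs next keys purely on PCI vendor:device pairs (0x8086:0x159b and 0x8086:0x1592 for E810, 0x8086:0x37d2 for X722, the 0x15b3 entries for Mellanox) and then reads the kernel's netdev names straight out of sysfs. A one-device sketch of the same lookup, using the address from this run as an example:

  pci=0000:31:00.0
  cat /sys/bus/pci/devices/$pci/vendor      # 0x8086
  cat /sys/bus/pci/devices/$pci/device      # 0x159b -> an E810 port
  ls /sys/bus/pci/devices/$pci/net/         # -> cvl_0_0, the name echoed below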
00:21:27.223 00:05:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.223 00:05:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.223 00:05:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.223 00:05:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.223 00:05:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.223 00:05:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.223 00:05:55 -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.223 00:05:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.223 00:05:55 -- nvmf/common.sh@296 -- # e810=() 00:21:27.223 00:05:55 -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.223 00:05:55 -- nvmf/common.sh@297 -- # x722=() 00:21:27.223 00:05:55 -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.223 00:05:55 -- nvmf/common.sh@298 -- # mlx=() 00:21:27.223 00:05:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.223 00:05:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.223 00:05:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.223 00:05:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.223 00:05:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.223 00:05:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.223 00:05:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:27.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:27.223 00:05:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.223 00:05:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:27.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:27.223 00:05:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:21:27.223 00:05:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.223 00:05:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.223 00:05:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.223 00:05:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.223 00:05:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:27.223 Found net devices under 0000:31:00.0: cvl_0_0 00:21:27.223 00:05:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.223 00:05:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.223 00:05:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.223 00:05:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.223 00:05:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.223 00:05:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:27.223 Found net devices under 0000:31:00.1: cvl_0_1 00:21:27.223 00:05:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.223 00:05:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:27.223 00:05:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:27.223 00:05:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:27.223 00:05:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:27.223 00:05:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.223 00:05:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.223 00:05:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.223 00:05:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.223 00:05:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.223 00:05:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.223 00:05:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.223 00:05:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.223 00:05:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.223 00:05:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.223 00:05:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.223 00:05:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.223 00:05:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.223 00:05:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.223 00:05:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.223 00:05:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.223 00:05:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.223 00:05:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.223 00:05:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.223 00:05:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:27.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:21:27.223 00:21:27.223 --- 10.0.0.2 ping statistics --- 00:21:27.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.223 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:21:27.223 00:05:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:21:27.223 00:21:27.223 --- 10.0.0.1 ping statistics --- 00:21:27.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.223 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:21:27.223 00:05:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.223 00:05:56 -- nvmf/common.sh@411 -- # return 0 00:21:27.223 00:05:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:27.223 00:05:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.223 00:05:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:27.223 00:05:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:27.223 00:05:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.223 00:05:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:27.223 00:05:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:27.223 00:05:56 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:27.223 00:05:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:27.223 00:05:56 -- common/autotest_common.sh@10 -- # set +x 00:21:27.223 00:05:56 -- host/identify.sh@19 -- # nvmfpid=469540 00:21:27.223 00:05:56 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:27.223 00:05:56 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:27.223 00:05:56 -- host/identify.sh@23 -- # waitforlisten 469540 00:21:27.223 00:05:56 -- common/autotest_common.sh@817 -- # '[' -z 469540 ']' 00:21:27.223 00:05:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.223 00:05:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:27.223 00:05:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.223 00:05:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:27.223 00:05:56 -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 [2024-04-27 00:05:56.401543] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:21:27.224 [2024-04-27 00:05:56.401629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.224 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.224 [2024-04-27 00:05:56.474746] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.224 [2024-04-27 00:05:56.554779] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.224 [2024-04-27 00:05:56.554821] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:27.224 [2024-04-27 00:05:56.554829] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.224 [2024-04-27 00:05:56.554835] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.224 [2024-04-27 00:05:56.554847] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.224 [2024-04-27 00:05:56.554944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.224 [2024-04-27 00:05:56.555255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.224 [2024-04-27 00:05:56.555378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.224 [2024-04-27 00:05:56.555378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.224 00:05:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:27.224 00:05:57 -- common/autotest_common.sh@850 -- # return 0 00:21:27.224 00:05:57 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.224 00:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.224 00:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 [2024-04-27 00:05:57.165142] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.224 00:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.224 00:05:57 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:27.224 00:05:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:27.224 00:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 00:05:57 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:27.224 00:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.224 00:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 Malloc0 00:21:27.224 00:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.224 00:05:57 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.224 00:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.224 00:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 00:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.224 00:05:57 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:27.224 00:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.224 00:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 00:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.224 00:05:57 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.224 00:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.224 00:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 [2024-04-27 00:05:57.264605] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.224 00:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.224 00:05:57 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:27.224 00:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.224 00:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 00:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.224 00:05:57 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:21:27.224 00:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.224 00:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 [2024-04-27 00:05:57.288431] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:27.224 [ 00:21:27.224 { 00:21:27.224 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:27.224 "subtype": "Discovery", 00:21:27.224 "listen_addresses": [ 00:21:27.224 { 00:21:27.224 "transport": "TCP", 00:21:27.224 "trtype": "TCP", 00:21:27.224 "adrfam": "IPv4", 00:21:27.224 "traddr": "10.0.0.2", 00:21:27.224 "trsvcid": "4420" 00:21:27.224 } 00:21:27.224 ], 00:21:27.224 "allow_any_host": true, 00:21:27.224 "hosts": [] 00:21:27.224 }, 00:21:27.224 { 00:21:27.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.224 "subtype": "NVMe", 00:21:27.224 "listen_addresses": [ 00:21:27.224 { 00:21:27.224 "transport": "TCP", 00:21:27.224 "trtype": "TCP", 00:21:27.224 "adrfam": "IPv4", 00:21:27.224 "traddr": "10.0.0.2", 00:21:27.224 "trsvcid": "4420" 00:21:27.224 } 00:21:27.224 ], 00:21:27.224 "allow_any_host": true, 00:21:27.224 "hosts": [], 00:21:27.224 "serial_number": "SPDK00000000000001", 00:21:27.224 "model_number": "SPDK bdev Controller", 00:21:27.224 "max_namespaces": 32, 00:21:27.224 "min_cntlid": 1, 00:21:27.224 "max_cntlid": 65519, 00:21:27.224 "namespaces": [ 00:21:27.224 { 00:21:27.224 "nsid": 1, 00:21:27.224 "bdev_name": "Malloc0", 00:21:27.224 "name": "Malloc0", 00:21:27.224 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:27.224 "eui64": "ABCDEF0123456789", 00:21:27.224 "uuid": "ad9a7973-3d96-43e4-8780-a3c339847310" 00:21:27.224 } 00:21:27.224 ] 00:21:27.224 } 00:21:27.224 ] 00:21:27.224 00:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.224 00:05:57 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:27.224 [2024-04-27 00:05:57.326558] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
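The spdk_nvme_identify run launched above is pointed at the discovery subsystem, and -L all is what produces the controller bring-up trace that follows: fabric CONNECT on the admin queue, VS and CAP property reads, CC.EN toggled off and back on, a wait for CSTS.RDY = 1, then IDENTIFY controller (CNS 01h) and async-event setup. As a sketch of how the tool is driven outside the harness (the binary path assumes an SPDK build tree; the second call, against the data subsystem, is illustrative rather than copied from this log):

  ID=./build/bin/spdk_nvme_identify
  $ID -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
  $ID -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'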
00:21:27.224 [2024-04-27 00:05:57.326623] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469892 ] 00:21:27.224 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.224 [2024-04-27 00:05:57.360500] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:27.224 [2024-04-27 00:05:57.360551] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:27.224 [2024-04-27 00:05:57.360556] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:27.224 [2024-04-27 00:05:57.360568] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:27.224 [2024-04-27 00:05:57.360576] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:27.224 [2024-04-27 00:05:57.363872] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:27.224 [2024-04-27 00:05:57.363906] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x164bd10 0 00:21:27.224 [2024-04-27 00:05:57.371846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:27.224 [2024-04-27 00:05:57.371857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:27.224 [2024-04-27 00:05:57.371861] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:27.224 [2024-04-27 00:05:57.371865] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:27.224 [2024-04-27 00:05:57.371900] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.224 [2024-04-27 00:05:57.371906] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.224 [2024-04-27 00:05:57.371910] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.224 [2024-04-27 00:05:57.371924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:27.224 [2024-04-27 00:05:57.371940] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.224 [2024-04-27 00:05:57.379849] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.224 [2024-04-27 00:05:57.379858] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.224 [2024-04-27 00:05:57.379862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.224 [2024-04-27 00:05:57.379866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3a60) on tqpair=0x164bd10 00:21:27.224 [2024-04-27 00:05:57.379880] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:27.224 [2024-04-27 00:05:57.379887] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:27.224 [2024-04-27 00:05:57.379892] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:27.224 [2024-04-27 00:05:57.379906] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.224 [2024-04-27 00:05:57.379910] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:27.224 [2024-04-27 00:05:57.379913] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.224 [2024-04-27 00:05:57.379920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.224 [2024-04-27 00:05:57.379933] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.224 [2024-04-27 00:05:57.380136] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.224 [2024-04-27 00:05:57.380143] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.224 [2024-04-27 00:05:57.380146] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.224 [2024-04-27 00:05:57.380150] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3a60) on tqpair=0x164bd10 00:21:27.224 [2024-04-27 00:05:57.380156] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:27.224 [2024-04-27 00:05:57.380167] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:27.225 [2024-04-27 00:05:57.380174] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.380178] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.380181] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.225 [2024-04-27 00:05:57.380188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.225 [2024-04-27 00:05:57.380198] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.225 [2024-04-27 00:05:57.380417] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.225 [2024-04-27 00:05:57.380424] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.225 [2024-04-27 00:05:57.380427] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.380431] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3a60) on tqpair=0x164bd10 00:21:27.225 [2024-04-27 00:05:57.380437] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:27.225 [2024-04-27 00:05:57.380445] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:27.225 [2024-04-27 00:05:57.380452] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.380456] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.380459] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.225 [2024-04-27 00:05:57.380466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.225 [2024-04-27 00:05:57.380476] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.225 [2024-04-27 00:05:57.380676] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.225 [2024-04-27 
00:05:57.380682] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.225 [2024-04-27 00:05:57.380686] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.380690] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3a60) on tqpair=0x164bd10 00:21:27.225 [2024-04-27 00:05:57.380695] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:27.225 [2024-04-27 00:05:57.380705] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.380708] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.380712] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.225 [2024-04-27 00:05:57.380718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.225 [2024-04-27 00:05:57.380728] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.225 [2024-04-27 00:05:57.380935] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.225 [2024-04-27 00:05:57.380942] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.225 [2024-04-27 00:05:57.380945] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.380949] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3a60) on tqpair=0x164bd10 00:21:27.225 [2024-04-27 00:05:57.380955] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:27.225 [2024-04-27 00:05:57.380959] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:27.225 [2024-04-27 00:05:57.380969] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:27.225 [2024-04-27 00:05:57.381074] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:27.225 [2024-04-27 00:05:57.381079] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:27.225 [2024-04-27 00:05:57.381088] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381092] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381095] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.225 [2024-04-27 00:05:57.381102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.225 [2024-04-27 00:05:57.381112] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.225 [2024-04-27 00:05:57.381304] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.225 [2024-04-27 00:05:57.381310] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.225 [2024-04-27 00:05:57.381314] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381317] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3a60) on tqpair=0x164bd10 00:21:27.225 [2024-04-27 00:05:57.381323] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:27.225 [2024-04-27 00:05:57.381332] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381336] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381339] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.225 [2024-04-27 00:05:57.381346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.225 [2024-04-27 00:05:57.381356] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.225 [2024-04-27 00:05:57.381551] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.225 [2024-04-27 00:05:57.381557] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.225 [2024-04-27 00:05:57.381561] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381564] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3a60) on tqpair=0x164bd10 00:21:27.225 [2024-04-27 00:05:57.381570] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:27.225 [2024-04-27 00:05:57.381574] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:27.225 [2024-04-27 00:05:57.381582] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:27.225 [2024-04-27 00:05:57.381596] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:27.225 [2024-04-27 00:05:57.381607] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381610] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.225 [2024-04-27 00:05:57.381617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.225 [2024-04-27 00:05:57.381627] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.225 [2024-04-27 00:05:57.381867] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.225 [2024-04-27 00:05:57.381874] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.225 [2024-04-27 00:05:57.381881] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381884] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164bd10): datao=0, datal=4096, cccid=0 00:21:27.225 [2024-04-27 00:05:57.381889] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16b3a60) on tqpair(0x164bd10): expected_datao=0, payload_size=4096 00:21:27.225 [2024-04-27 00:05:57.381894] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381938] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.381943] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.423023] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.225 [2024-04-27 00:05:57.423034] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.225 [2024-04-27 00:05:57.423037] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.423041] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3a60) on tqpair=0x164bd10 00:21:27.225 [2024-04-27 00:05:57.423050] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:27.225 [2024-04-27 00:05:57.423057] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:27.225 [2024-04-27 00:05:57.423062] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:27.225 [2024-04-27 00:05:57.423067] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:27.225 [2024-04-27 00:05:57.423072] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:27.225 [2024-04-27 00:05:57.423077] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:27.225 [2024-04-27 00:05:57.423086] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:27.225 [2024-04-27 00:05:57.423093] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.423097] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.423101] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.225 [2024-04-27 00:05:57.423108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:27.225 [2024-04-27 00:05:57.423120] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.225 [2024-04-27 00:05:57.423331] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.225 [2024-04-27 00:05:57.423337] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.225 [2024-04-27 00:05:57.423340] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.225 [2024-04-27 00:05:57.423344] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3a60) on tqpair=0x164bd10 00:21:27.225 [2024-04-27 00:05:57.423352] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423356] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423360] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164bd10) 00:21:27.226 [2024-04-27 00:05:57.423366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:21:27.226 [2024-04-27 00:05:57.423372] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423375] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423379] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x164bd10) 00:21:27.226 [2024-04-27 00:05:57.423385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.226 [2024-04-27 00:05:57.423393] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423400] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x164bd10) 00:21:27.226 [2024-04-27 00:05:57.423406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.226 [2024-04-27 00:05:57.423412] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423416] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423419] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.226 [2024-04-27 00:05:57.423425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.226 [2024-04-27 00:05:57.423429] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:27.226 [2024-04-27 00:05:57.423440] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:27.226 [2024-04-27 00:05:57.423446] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423450] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164bd10) 00:21:27.226 [2024-04-27 00:05:57.423457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.226 [2024-04-27 00:05:57.423468] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3a60, cid 0, qid 0 00:21:27.226 [2024-04-27 00:05:57.423473] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3bc0, cid 1, qid 0 00:21:27.226 [2024-04-27 00:05:57.423478] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3d20, cid 2, qid 0 00:21:27.226 [2024-04-27 00:05:57.423482] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.226 [2024-04-27 00:05:57.423487] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3fe0, cid 4, qid 0 00:21:27.226 [2024-04-27 00:05:57.423717] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.226 [2024-04-27 00:05:57.423724] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.226 [2024-04-27 00:05:57.423727] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423731] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3fe0) on tqpair=0x164bd10 
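For orientation: the bring-up traced above (FABRIC CONNECT, reading VS/CAP, clearing and then setting CC.EN, waiting for CSTS.RDY = 1, IDENTIFY, AER configuration and the keep-alive timeout) is what SPDK's host library performs internally whenever a controller is attached. A minimal, hedged sketch of driving the same flow against this discovery service through the public API could look like the following; the file name identify_sketch.c and the fields printed at the end are illustrative assumptions, not part of this test run.

/* identify_sketch.c - minimal, illustrative sketch only (file name and printed
 * fields are assumptions, not part of this test). Mirrors, at the API level,
 * the admin-queue bring-up traced in this log. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";
    if (spdk_env_init(&env_opts) < 0) {   /* needs the usual hugepage setup */
        return 1;
    }

    /* Same discovery service the test targets: TCP, 10.0.0.2, port 4420. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
        return 1;
    }

    /* spdk_nvme_connect() runs the init state machine seen above: FABRIC
     * CONNECT, read VS/CAP, toggle CC.EN, poll CSTS.RDY, IDENTIFY, configure
     * AER and the keep-alive timer, then reports the controller as ready. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("CNTLID 0x%04x, MDTS %u\n", cdata->cntlid, cdata->mdts);

    spdk_nvme_detach(ctrlr);
    return 0;
}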
00:21:27.226 [2024-04-27 00:05:57.423736] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:27.226 [2024-04-27 00:05:57.423741] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:27.226 [2024-04-27 00:05:57.423752] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.423756] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164bd10) 00:21:27.226 [2024-04-27 00:05:57.423762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.226 [2024-04-27 00:05:57.423772] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3fe0, cid 4, qid 0 00:21:27.226 [2024-04-27 00:05:57.427846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.226 [2024-04-27 00:05:57.427854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.226 [2024-04-27 00:05:57.427857] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.427861] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164bd10): datao=0, datal=4096, cccid=4 00:21:27.226 [2024-04-27 00:05:57.427868] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16b3fe0) on tqpair(0x164bd10): expected_datao=0, payload_size=4096 00:21:27.226 [2024-04-27 00:05:57.427872] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.427879] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.427882] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.427888] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.226 [2024-04-27 00:05:57.427894] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.226 [2024-04-27 00:05:57.427897] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.427901] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3fe0) on tqpair=0x164bd10 00:21:27.226 [2024-04-27 00:05:57.427913] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:27.226 [2024-04-27 00:05:57.427932] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.427936] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164bd10) 00:21:27.226 [2024-04-27 00:05:57.427943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.226 [2024-04-27 00:05:57.427950] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.427953] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.427956] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x164bd10) 00:21:27.226 [2024-04-27 00:05:57.427963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.226 [2024-04-27 00:05:57.427978] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3fe0, cid 4, qid 0 00:21:27.226 [2024-04-27 00:05:57.427983] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b4140, cid 5, qid 0 00:21:27.226 [2024-04-27 00:05:57.428239] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.226 [2024-04-27 00:05:57.428245] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.226 [2024-04-27 00:05:57.428249] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.428252] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164bd10): datao=0, datal=1024, cccid=4 00:21:27.226 [2024-04-27 00:05:57.428257] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16b3fe0) on tqpair(0x164bd10): expected_datao=0, payload_size=1024 00:21:27.226 [2024-04-27 00:05:57.428261] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.428267] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.428271] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.428276] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.226 [2024-04-27 00:05:57.428282] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.226 [2024-04-27 00:05:57.428285] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.226 [2024-04-27 00:05:57.428289] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b4140) on tqpair=0x164bd10 00:21:27.491 [2024-04-27 00:05:57.470047] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.491 [2024-04-27 00:05:57.470058] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.491 [2024-04-27 00:05:57.470061] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470065] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3fe0) on tqpair=0x164bd10 00:21:27.491 [2024-04-27 00:05:57.470076] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470081] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164bd10) 00:21:27.491 [2024-04-27 00:05:57.470088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.491 [2024-04-27 00:05:57.470105] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3fe0, cid 4, qid 0 00:21:27.491 [2024-04-27 00:05:57.470346] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.491 [2024-04-27 00:05:57.470352] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.491 [2024-04-27 00:05:57.470356] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470359] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164bd10): datao=0, datal=3072, cccid=4 00:21:27.491 [2024-04-27 00:05:57.470364] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16b3fe0) on tqpair(0x164bd10): expected_datao=0, payload_size=3072 00:21:27.491 [2024-04-27 00:05:57.470368] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470393] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
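The GET LOG PAGE (02h) commands in this stretch of the trace select log page 0x70, the NVMe-oF discovery log, and pull it over in pieces (a 4096-byte read, then smaller 1024/3072-byte reads, and finally an 8-byte re-read of the generation counter) rather than in one transfer. A hedged sketch of issuing the same admin command through spdk_nvme_ctrlr_cmd_get_log_page() and polling its completion follows; get_discovery_header() and the g_done flag are illustrative names, not SPDK API, and ctrlr is assumed to be a handle obtained as in the earlier sketch.

/* Illustrative sketch only: reading the fixed 1024-byte discovery log header
 * the way the GET LOG PAGE exchanges above do. get_discovery_header() and
 * g_done are assumed names, not SPDK API. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_done;

static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cb_arg;
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "GET LOG PAGE failed\n");
    }
    g_done = true;
}

/* The test repeats this with larger payloads and offsets until every record
 * has been transferred; here only the header (genctr/numrec) is fetched. */
static int
get_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                     struct spdk_nvmf_discovery_log_page *hdr)
{
    int rc;

    memset(hdr, 0, sizeof(*hdr));
    g_done = false;
    rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                          0 /* nsid, as in the trace */,
                                          hdr, sizeof(*hdr), 0 /* offset */,
                                          get_log_done, NULL);
    if (rc != 0) {
        return rc;
    }
    while (!g_done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    printf("Generation Counter: %" PRIu64 ", Number of Records: %" PRIu64 "\n",
           hdr->genctr, hdr->numrec);
    return 0;
}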
00:21:27.491 [2024-04-27 00:05:57.470397] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470581] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.491 [2024-04-27 00:05:57.470587] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.491 [2024-04-27 00:05:57.470590] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470594] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3fe0) on tqpair=0x164bd10 00:21:27.491 [2024-04-27 00:05:57.470603] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470607] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164bd10) 00:21:27.491 [2024-04-27 00:05:57.470614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.491 [2024-04-27 00:05:57.470626] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3fe0, cid 4, qid 0 00:21:27.491 [2024-04-27 00:05:57.470874] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.491 [2024-04-27 00:05:57.470881] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.491 [2024-04-27 00:05:57.470884] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470888] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164bd10): datao=0, datal=8, cccid=4 00:21:27.491 [2024-04-27 00:05:57.470892] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16b3fe0) on tqpair(0x164bd10): expected_datao=0, payload_size=8 00:21:27.491 [2024-04-27 00:05:57.470896] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470903] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.470906] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.513845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.491 [2024-04-27 00:05:57.513854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.491 [2024-04-27 00:05:57.513858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.491 [2024-04-27 00:05:57.513862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3fe0) on tqpair=0x164bd10 00:21:27.491 ===================================================== 00:21:27.491 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:27.491 ===================================================== 00:21:27.491 Controller Capabilities/Features 00:21:27.491 ================================ 00:21:27.491 Vendor ID: 0000 00:21:27.491 Subsystem Vendor ID: 0000 00:21:27.491 Serial Number: .................... 00:21:27.491 Model Number: ........................................ 
00:21:27.491 Firmware Version: 24.05 00:21:27.491 Recommended Arb Burst: 0 00:21:27.491 IEEE OUI Identifier: 00 00 00 00:21:27.491 Multi-path I/O 00:21:27.491 May have multiple subsystem ports: No 00:21:27.491 May have multiple controllers: No 00:21:27.491 Associated with SR-IOV VF: No 00:21:27.491 Max Data Transfer Size: 131072 00:21:27.491 Max Number of Namespaces: 0 00:21:27.491 Max Number of I/O Queues: 1024 00:21:27.491 NVMe Specification Version (VS): 1.3 00:21:27.491 NVMe Specification Version (Identify): 1.3 00:21:27.491 Maximum Queue Entries: 128 00:21:27.491 Contiguous Queues Required: Yes 00:21:27.491 Arbitration Mechanisms Supported 00:21:27.491 Weighted Round Robin: Not Supported 00:21:27.491 Vendor Specific: Not Supported 00:21:27.491 Reset Timeout: 15000 ms 00:21:27.491 Doorbell Stride: 4 bytes 00:21:27.491 NVM Subsystem Reset: Not Supported 00:21:27.491 Command Sets Supported 00:21:27.491 NVM Command Set: Supported 00:21:27.491 Boot Partition: Not Supported 00:21:27.491 Memory Page Size Minimum: 4096 bytes 00:21:27.491 Memory Page Size Maximum: 4096 bytes 00:21:27.491 Persistent Memory Region: Not Supported 00:21:27.491 Optional Asynchronous Events Supported 00:21:27.491 Namespace Attribute Notices: Not Supported 00:21:27.491 Firmware Activation Notices: Not Supported 00:21:27.491 ANA Change Notices: Not Supported 00:21:27.491 PLE Aggregate Log Change Notices: Not Supported 00:21:27.491 LBA Status Info Alert Notices: Not Supported 00:21:27.491 EGE Aggregate Log Change Notices: Not Supported 00:21:27.491 Normal NVM Subsystem Shutdown event: Not Supported 00:21:27.491 Zone Descriptor Change Notices: Not Supported 00:21:27.491 Discovery Log Change Notices: Supported 00:21:27.491 Controller Attributes 00:21:27.491 128-bit Host Identifier: Not Supported 00:21:27.491 Non-Operational Permissive Mode: Not Supported 00:21:27.491 NVM Sets: Not Supported 00:21:27.491 Read Recovery Levels: Not Supported 00:21:27.491 Endurance Groups: Not Supported 00:21:27.491 Predictable Latency Mode: Not Supported 00:21:27.491 Traffic Based Keep ALive: Not Supported 00:21:27.491 Namespace Granularity: Not Supported 00:21:27.491 SQ Associations: Not Supported 00:21:27.492 UUID List: Not Supported 00:21:27.492 Multi-Domain Subsystem: Not Supported 00:21:27.492 Fixed Capacity Management: Not Supported 00:21:27.492 Variable Capacity Management: Not Supported 00:21:27.492 Delete Endurance Group: Not Supported 00:21:27.492 Delete NVM Set: Not Supported 00:21:27.492 Extended LBA Formats Supported: Not Supported 00:21:27.492 Flexible Data Placement Supported: Not Supported 00:21:27.492 00:21:27.492 Controller Memory Buffer Support 00:21:27.492 ================================ 00:21:27.492 Supported: No 00:21:27.492 00:21:27.492 Persistent Memory Region Support 00:21:27.492 ================================ 00:21:27.492 Supported: No 00:21:27.492 00:21:27.492 Admin Command Set Attributes 00:21:27.492 ============================ 00:21:27.492 Security Send/Receive: Not Supported 00:21:27.492 Format NVM: Not Supported 00:21:27.492 Firmware Activate/Download: Not Supported 00:21:27.492 Namespace Management: Not Supported 00:21:27.492 Device Self-Test: Not Supported 00:21:27.492 Directives: Not Supported 00:21:27.492 NVMe-MI: Not Supported 00:21:27.492 Virtualization Management: Not Supported 00:21:27.492 Doorbell Buffer Config: Not Supported 00:21:27.492 Get LBA Status Capability: Not Supported 00:21:27.492 Command & Feature Lockdown Capability: Not Supported 00:21:27.492 Abort Command Limit: 1 00:21:27.492 Async 
Event Request Limit: 4 00:21:27.492 Number of Firmware Slots: N/A 00:21:27.492 Firmware Slot 1 Read-Only: N/A 00:21:27.492 Firmware Activation Without Reset: N/A 00:21:27.492 Multiple Update Detection Support: N/A 00:21:27.492 Firmware Update Granularity: No Information Provided 00:21:27.492 Per-Namespace SMART Log: No 00:21:27.492 Asymmetric Namespace Access Log Page: Not Supported 00:21:27.492 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:27.492 Command Effects Log Page: Not Supported 00:21:27.492 Get Log Page Extended Data: Supported 00:21:27.492 Telemetry Log Pages: Not Supported 00:21:27.492 Persistent Event Log Pages: Not Supported 00:21:27.492 Supported Log Pages Log Page: May Support 00:21:27.492 Commands Supported & Effects Log Page: Not Supported 00:21:27.492 Feature Identifiers & Effects Log Page:May Support 00:21:27.492 NVMe-MI Commands & Effects Log Page: May Support 00:21:27.492 Data Area 4 for Telemetry Log: Not Supported 00:21:27.492 Error Log Page Entries Supported: 128 00:21:27.492 Keep Alive: Not Supported 00:21:27.492 00:21:27.492 NVM Command Set Attributes 00:21:27.492 ========================== 00:21:27.492 Submission Queue Entry Size 00:21:27.492 Max: 1 00:21:27.492 Min: 1 00:21:27.492 Completion Queue Entry Size 00:21:27.492 Max: 1 00:21:27.492 Min: 1 00:21:27.492 Number of Namespaces: 0 00:21:27.492 Compare Command: Not Supported 00:21:27.492 Write Uncorrectable Command: Not Supported 00:21:27.492 Dataset Management Command: Not Supported 00:21:27.492 Write Zeroes Command: Not Supported 00:21:27.492 Set Features Save Field: Not Supported 00:21:27.492 Reservations: Not Supported 00:21:27.492 Timestamp: Not Supported 00:21:27.492 Copy: Not Supported 00:21:27.492 Volatile Write Cache: Not Present 00:21:27.492 Atomic Write Unit (Normal): 1 00:21:27.492 Atomic Write Unit (PFail): 1 00:21:27.492 Atomic Compare & Write Unit: 1 00:21:27.492 Fused Compare & Write: Supported 00:21:27.492 Scatter-Gather List 00:21:27.492 SGL Command Set: Supported 00:21:27.492 SGL Keyed: Supported 00:21:27.492 SGL Bit Bucket Descriptor: Not Supported 00:21:27.492 SGL Metadata Pointer: Not Supported 00:21:27.492 Oversized SGL: Not Supported 00:21:27.492 SGL Metadata Address: Not Supported 00:21:27.492 SGL Offset: Supported 00:21:27.492 Transport SGL Data Block: Not Supported 00:21:27.492 Replay Protected Memory Block: Not Supported 00:21:27.492 00:21:27.492 Firmware Slot Information 00:21:27.492 ========================= 00:21:27.492 Active slot: 0 00:21:27.492 00:21:27.492 00:21:27.492 Error Log 00:21:27.492 ========= 00:21:27.492 00:21:27.492 Active Namespaces 00:21:27.492 ================= 00:21:27.492 Discovery Log Page 00:21:27.492 ================== 00:21:27.492 Generation Counter: 2 00:21:27.492 Number of Records: 2 00:21:27.492 Record Format: 0 00:21:27.492 00:21:27.492 Discovery Log Entry 0 00:21:27.492 ---------------------- 00:21:27.492 Transport Type: 3 (TCP) 00:21:27.492 Address Family: 1 (IPv4) 00:21:27.492 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:27.492 Entry Flags: 00:21:27.492 Duplicate Returned Information: 1 00:21:27.492 Explicit Persistent Connection Support for Discovery: 1 00:21:27.492 Transport Requirements: 00:21:27.492 Secure Channel: Not Required 00:21:27.492 Port ID: 0 (0x0000) 00:21:27.492 Controller ID: 65535 (0xffff) 00:21:27.492 Admin Max SQ Size: 128 00:21:27.492 Transport Service Identifier: 4420 00:21:27.492 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:27.492 Transport Address: 10.0.0.2 00:21:27.492 
Discovery Log Entry 1 00:21:27.492 ---------------------- 00:21:27.492 Transport Type: 3 (TCP) 00:21:27.492 Address Family: 1 (IPv4) 00:21:27.492 Subsystem Type: 2 (NVM Subsystem) 00:21:27.492 Entry Flags: 00:21:27.492 Duplicate Returned Information: 0 00:21:27.492 Explicit Persistent Connection Support for Discovery: 0 00:21:27.492 Transport Requirements: 00:21:27.492 Secure Channel: Not Required 00:21:27.492 Port ID: 0 (0x0000) 00:21:27.492 Controller ID: 65535 (0xffff) 00:21:27.492 Admin Max SQ Size: 128 00:21:27.492 Transport Service Identifier: 4420 00:21:27.492 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:27.492 Transport Address: 10.0.0.2 [2024-04-27 00:05:57.513951] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:27.492 [2024-04-27 00:05:57.513964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.492 [2024-04-27 00:05:57.513971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.492 [2024-04-27 00:05:57.513977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.492 [2024-04-27 00:05:57.513983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.492 [2024-04-27 00:05:57.513993] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.492 [2024-04-27 00:05:57.513997] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.492 [2024-04-27 00:05:57.514001] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.492 [2024-04-27 00:05:57.514008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.492 [2024-04-27 00:05:57.514022] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.492 [2024-04-27 00:05:57.514116] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.492 [2024-04-27 00:05:57.514122] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.492 [2024-04-27 00:05:57.514126] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.492 [2024-04-27 00:05:57.514129] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.492 [2024-04-27 00:05:57.514137] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.492 [2024-04-27 00:05:57.514141] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.492 [2024-04-27 00:05:57.514144] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.492 [2024-04-27 00:05:57.514151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.492 [2024-04-27 00:05:57.514163] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.492 [2024-04-27 00:05:57.514381] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.492 [2024-04-27 00:05:57.514387] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.492 [2024-04-27 00:05:57.514391] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.492 [2024-04-27 00:05:57.514395] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.492 [2024-04-27 00:05:57.514400] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:27.492 [2024-04-27 00:05:57.514404] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:27.492 [2024-04-27 00:05:57.514413] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.492 [2024-04-27 00:05:57.514417] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.492 [2024-04-27 00:05:57.514421] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.514427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.514437] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.514631] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.514637] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.514640] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.514644] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.514654] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.514658] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.514662] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.514668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.514678] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.514876] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.514883] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.514888] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.514892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.514902] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.514906] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.514909] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.514916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.514926] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.515143] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 
00:05:57.515150] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.515153] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515157] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.515167] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515171] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515174] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.515181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.515190] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.515368] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.515374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.515378] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515381] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.515391] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515395] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515399] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.515405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.515415] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.515604] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.515610] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.515614] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515617] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.515628] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515631] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515635] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.515641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.515651] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.515847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.515854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.515857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
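The two discovery records printed above describe the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1 that the test connects to next. In the raw log page, traddr, trsvcid and subnqn are fixed-width, space-padded byte arrays, so they have to be trimmed before they can be reused; a rough sketch of turning one spdk_nvmf_discovery_log_page_entry into the same 'trtype:... traddr:... subnqn:...' string form that identify.sh later passes via -r might look like this. copy_trimmed() and build_trid_str() are assumed helper names, and the trtype/adrfam mapping is deliberately simplified to the TCP/IPv4 case seen here.

/* Illustrative sketch only: formatting one discovery record into a transport
 * ID string. Helper names are assumptions; only the TCP/IPv4 case from the
 * log above is handled. */
#include <stdio.h>
#include <string.h>
#include "spdk/nvmf_spec.h"

/* The log-page fields are fixed-width and space padded, not C strings:
 * copy them out and strip the trailing padding. */
static void
copy_trimmed(char *dst, const char *src, size_t len)
{
    memcpy(dst, src, len);
    dst[len] = '\0';
    for (size_t i = len; i > 0 && (dst[i - 1] == ' ' || dst[i - 1] == '\0'); i--) {
        dst[i - 1] = '\0';
    }
}

static void
build_trid_str(const struct spdk_nvmf_discovery_log_page_entry *e,
               char *out, size_t out_len)
{
    char traddr[SPDK_NVMF_TRADDR_MAX_LEN + 1];
    char trsvcid[SPDK_NVMF_TRSVCID_MAX_LEN + 1];
    char subnqn[257];   /* subnqn field is 256 bytes in a discovery entry */

    copy_trimmed(traddr, e->traddr, sizeof(e->traddr));
    copy_trimmed(trsvcid, e->trsvcid, sizeof(e->trsvcid));
    copy_trimmed(subnqn, e->subnqn, sizeof(e->subnqn));

    /* Entry 1 above reports trtype 3 (TCP) and adrfam 1 (IPv4). */
    snprintf(out, out_len, "trtype:%s adrfam:%s traddr:%s trsvcid:%s subnqn:%s",
             e->trtype == SPDK_NVMF_TRTYPE_TCP ? "TCP" : "?",
             e->adrfam == SPDK_NVMF_ADRFAM_IPV4 ? "IPv4" : "?",
             traddr, trsvcid, subnqn);
}

The resulting string has the same shape as the -r argument handed to spdk_nvme_identify further down in this log, and could be fed to spdk_nvme_transport_id_parse() the same way.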
00:21:27.493 [2024-04-27 00:05:57.515862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.515873] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515877] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.515880] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.515887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.515897] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.516115] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.516121] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.516124] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516128] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.516138] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516145] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.516152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.516161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.516361] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.516367] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.516370] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516374] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.516384] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516388] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516391] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.516398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.516407] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.516594] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.516601] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.516604] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.516618] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516621] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516625] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.516632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.516641] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.516834] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.516844] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.516847] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516851] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.516863] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516867] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.516870] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.493 [2024-04-27 00:05:57.516877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.493 [2024-04-27 00:05:57.516887] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.493 [2024-04-27 00:05:57.517104] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.493 [2024-04-27 00:05:57.517111] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.493 [2024-04-27 00:05:57.517114] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.517118] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.493 [2024-04-27 00:05:57.517128] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.517132] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.493 [2024-04-27 00:05:57.517135] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.494 [2024-04-27 00:05:57.517142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.494 [2024-04-27 00:05:57.517151] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.494 [2024-04-27 00:05:57.517353] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.494 [2024-04-27 00:05:57.517360] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.494 [2024-04-27 00:05:57.517363] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.517367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.494 [2024-04-27 00:05:57.517377] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.517380] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.494 [2024-04-27 
00:05:57.517384] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.494 [2024-04-27 00:05:57.517391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.494 [2024-04-27 00:05:57.517400] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.494 [2024-04-27 00:05:57.517574] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.494 [2024-04-27 00:05:57.517581] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.494 [2024-04-27 00:05:57.517584] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.517588] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.494 [2024-04-27 00:05:57.517598] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.517601] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.517605] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.494 [2024-04-27 00:05:57.517612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.494 [2024-04-27 00:05:57.517621] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.494 [2024-04-27 00:05:57.517805] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.494 [2024-04-27 00:05:57.517811] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.494 [2024-04-27 00:05:57.517814] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.517818] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.494 [2024-04-27 00:05:57.517828] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.517833] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.517840] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.494 [2024-04-27 00:05:57.517847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.494 [2024-04-27 00:05:57.517857] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.494 [2024-04-27 00:05:57.518078] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.494 [2024-04-27 00:05:57.518084] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.494 [2024-04-27 00:05:57.518088] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518091] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.494 [2024-04-27 00:05:57.518101] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518105] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518109] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.494 [2024-04-27 00:05:57.518115] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.494 [2024-04-27 00:05:57.518125] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.494 [2024-04-27 00:05:57.518345] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.494 [2024-04-27 00:05:57.518351] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.494 [2024-04-27 00:05:57.518355] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518358] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.494 [2024-04-27 00:05:57.518369] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518373] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518376] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.494 [2024-04-27 00:05:57.518383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.494 [2024-04-27 00:05:57.518392] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.494 [2024-04-27 00:05:57.518591] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.494 [2024-04-27 00:05:57.518597] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.494 [2024-04-27 00:05:57.518601] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518604] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.494 [2024-04-27 00:05:57.518614] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518618] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518622] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.494 [2024-04-27 00:05:57.518628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.494 [2024-04-27 00:05:57.518638] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.494 [2024-04-27 00:05:57.518806] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.494 [2024-04-27 00:05:57.518813] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.494 [2024-04-27 00:05:57.518816] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518820] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.494 [2024-04-27 00:05:57.518830] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.518835] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.522844] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164bd10) 00:21:27.494 [2024-04-27 00:05:57.522852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.494 [2024-04-27 00:05:57.522863] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b3e80, cid 3, qid 0 00:21:27.494 [2024-04-27 00:05:57.523077] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.494 [2024-04-27 00:05:57.523083] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.494 [2024-04-27 00:05:57.523087] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.523091] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16b3e80) on tqpair=0x164bd10 00:21:27.494 [2024-04-27 00:05:57.523099] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:21:27.494 00:21:27.494 00:05:57 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:27.494 [2024-04-27 00:05:57.561954] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:21:27.494 [2024-04-27 00:05:57.561991] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469900 ] 00:21:27.494 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.494 [2024-04-27 00:05:57.595380] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:27.494 [2024-04-27 00:05:57.595426] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:27.494 [2024-04-27 00:05:57.595431] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:27.494 [2024-04-27 00:05:57.595443] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:27.494 [2024-04-27 00:05:57.595451] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:27.494 [2024-04-27 00:05:57.598866] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:27.494 [2024-04-27 00:05:57.598896] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16b3d10 0 00:21:27.494 [2024-04-27 00:05:57.606845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:27.494 [2024-04-27 00:05:57.606854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:27.494 [2024-04-27 00:05:57.606859] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:27.494 [2024-04-27 00:05:57.606862] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:27.494 [2024-04-27 00:05:57.606893] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.606898] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.606902] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.494 [2024-04-27 00:05:57.606914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:27.494 [2024-04-27 00:05:57.606929] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.494 [2024-04-27 00:05:57.614846] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.494 [2024-04-27 00:05:57.614855] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.494 [2024-04-27 00:05:57.614858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.494 [2024-04-27 00:05:57.614866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171ba60) on tqpair=0x16b3d10 00:21:27.494 [2024-04-27 00:05:57.614879] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:27.495 [2024-04-27 00:05:57.614885] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:27.495 [2024-04-27 00:05:57.614891] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:27.495 [2024-04-27 00:05:57.614903] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.614907] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.614910] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.495 [2024-04-27 00:05:57.614918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.495 [2024-04-27 00:05:57.614930] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.495 [2024-04-27 00:05:57.615118] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.495 [2024-04-27 00:05:57.615124] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.495 [2024-04-27 00:05:57.615128] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615132] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171ba60) on tqpair=0x16b3d10 00:21:27.495 [2024-04-27 00:05:57.615138] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:27.495 [2024-04-27 00:05:57.615145] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:27.495 [2024-04-27 00:05:57.615152] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615156] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615159] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.495 [2024-04-27 00:05:57.615166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.495 [2024-04-27 00:05:57.615176] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.495 [2024-04-27 00:05:57.615395] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.495 [2024-04-27 00:05:57.615402] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.495 [2024-04-27 00:05:57.615405] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615409] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171ba60) on tqpair=0x16b3d10 00:21:27.495 [2024-04-27 00:05:57.615415] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:27.495 [2024-04-27 00:05:57.615422] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:27.495 [2024-04-27 00:05:57.615429] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615433] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615437] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.495 [2024-04-27 00:05:57.615443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.495 [2024-04-27 00:05:57.615453] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.495 [2024-04-27 00:05:57.615649] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.495 [2024-04-27 00:05:57.615656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.495 [2024-04-27 00:05:57.615659] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615663] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171ba60) on tqpair=0x16b3d10 00:21:27.495 [2024-04-27 00:05:57.615671] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:27.495 [2024-04-27 00:05:57.615680] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615684] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615688] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.495 [2024-04-27 00:05:57.615694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.495 [2024-04-27 00:05:57.615704] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.495 [2024-04-27 00:05:57.615901] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.495 [2024-04-27 00:05:57.615908] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.495 [2024-04-27 00:05:57.615912] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.615915] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171ba60) on tqpair=0x16b3d10 00:21:27.495 [2024-04-27 00:05:57.615921] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:27.495 [2024-04-27 00:05:57.615926] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:27.495 [2024-04-27 00:05:57.615933] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:27.495 [2024-04-27 00:05:57.616039] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:27.495 [2024-04-27 00:05:57.616043] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:27.495 [2024-04-27 00:05:57.616051] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616055] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616058] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.495 [2024-04-27 00:05:57.616065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.495 [2024-04-27 00:05:57.616075] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.495 [2024-04-27 00:05:57.616271] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.495 [2024-04-27 00:05:57.616278] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.495 [2024-04-27 00:05:57.616281] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616285] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171ba60) on tqpair=0x16b3d10 00:21:27.495 [2024-04-27 00:05:57.616291] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:27.495 [2024-04-27 00:05:57.616300] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616304] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616307] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.495 [2024-04-27 00:05:57.616314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.495 [2024-04-27 00:05:57.616324] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.495 [2024-04-27 00:05:57.616552] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.495 [2024-04-27 00:05:57.616558] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.495 [2024-04-27 00:05:57.616562] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616568] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171ba60) on tqpair=0x16b3d10 00:21:27.495 [2024-04-27 00:05:57.616573] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:27.495 [2024-04-27 00:05:57.616578] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:27.495 [2024-04-27 00:05:57.616586] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:27.495 [2024-04-27 00:05:57.616593] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:27.495 [2024-04-27 00:05:57.616603] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616607] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.495 [2024-04-27 00:05:57.616614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.495 [2024-04-27 00:05:57.616624] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.495 [2024-04-27 00:05:57.616840] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.495 [2024-04-27 00:05:57.616847] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.495 [2024-04-27 00:05:57.616851] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616855] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b3d10): datao=0, datal=4096, cccid=0 00:21:27.495 [2024-04-27 00:05:57.616859] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171ba60) on tqpair(0x16b3d10): expected_datao=0, payload_size=4096 00:21:27.495 [2024-04-27 00:05:57.616864] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616871] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.616875] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.617055] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.495 [2024-04-27 00:05:57.617062] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.495 [2024-04-27 00:05:57.617066] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.495 [2024-04-27 00:05:57.617069] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171ba60) on tqpair=0x16b3d10 00:21:27.495 [2024-04-27 00:05:57.617078] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:27.495 [2024-04-27 00:05:57.617083] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:27.495 [2024-04-27 00:05:57.617087] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:27.495 [2024-04-27 00:05:57.617092] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:27.495 [2024-04-27 00:05:57.617096] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:27.495 [2024-04-27 00:05:57.617101] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.617109] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.617116] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617120] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617124] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.496 [2024-04-27 00:05:57.617131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:27.496 [2024-04-27 00:05:57.617144] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.496 [2024-04-27 00:05:57.617357] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.496 [2024-04-27 00:05:57.617364] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.496 [2024-04-27 00:05:57.617367] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617371] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171ba60) on tqpair=0x16b3d10 00:21:27.496 [2024-04-27 00:05:57.617378] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617382] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617386] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b3d10) 00:21:27.496 [2024-04-27 00:05:57.617392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.496 [2024-04-27 00:05:57.617398] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617402] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617405] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16b3d10) 00:21:27.496 [2024-04-27 00:05:57.617411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.496 [2024-04-27 00:05:57.617417] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617421] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617424] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16b3d10) 00:21:27.496 [2024-04-27 00:05:57.617430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.496 [2024-04-27 00:05:57.617436] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617440] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617443] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b3d10) 00:21:27.496 [2024-04-27 00:05:57.617449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.496 [2024-04-27 00:05:57.617454] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.617464] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.617471] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617475] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b3d10) 00:21:27.496 [2024-04-27 00:05:57.617481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.496 [2024-04-27 00:05:57.617493] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ba60, cid 0, qid 0 00:21:27.496 [2024-04-27 00:05:57.617498] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171bbc0, cid 1, qid 0 00:21:27.496 [2024-04-27 00:05:57.617503] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171bd20, cid 2, qid 0 00:21:27.496 [2024-04-27 00:05:57.617507] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171be80, cid 3, qid 0 00:21:27.496 [2024-04-27 00:05:57.617512] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171bfe0, cid 4, qid 0 00:21:27.496 [2024-04-27 00:05:57.617687] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.496 [2024-04-27 00:05:57.617694] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.496 [2024-04-27 00:05:57.617697] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617703] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171bfe0) on tqpair=0x16b3d10 00:21:27.496 [2024-04-27 00:05:57.617709] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:27.496 [2024-04-27 00:05:57.617714] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.617723] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.617729] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.617736] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617740] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617743] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b3d10) 00:21:27.496 [2024-04-27 00:05:57.617750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:27.496 [2024-04-27 00:05:57.617759] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171bfe0, cid 4, qid 0 00:21:27.496 [2024-04-27 00:05:57.617929] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.496 [2024-04-27 00:05:57.617936] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.496 [2024-04-27 00:05:57.617940] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.617943] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171bfe0) on tqpair=0x16b3d10 00:21:27.496 [2024-04-27 00:05:57.617996] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.618005] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.618013] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.618017] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b3d10) 00:21:27.496 [2024-04-27 00:05:57.618023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.496 [2024-04-27 00:05:57.618033] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171bfe0, cid 4, qid 0 00:21:27.496 [2024-04-27 00:05:57.618245] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.496 [2024-04-27 00:05:57.618252] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.496 [2024-04-27 00:05:57.618255] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.618259] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b3d10): datao=0, datal=4096, cccid=4 00:21:27.496 [2024-04-27 00:05:57.618263] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171bfe0) on tqpair(0x16b3d10): expected_datao=0, payload_size=4096 00:21:27.496 [2024-04-27 00:05:57.618268] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.618274] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.618278] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.618428] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.496 [2024-04-27 00:05:57.618434] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.496 [2024-04-27 00:05:57.618438] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.618441] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171bfe0) on tqpair=0x16b3d10 00:21:27.496 [2024-04-27 00:05:57.618451] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:27.496 [2024-04-27 00:05:57.618466] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.618475] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:27.496 [2024-04-27 00:05:57.618482] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.496 [2024-04-27 00:05:57.618486] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b3d10) 00:21:27.496 [2024-04-27 00:05:57.618493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.496 [2024-04-27 00:05:57.618503] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171bfe0, cid 4, qid 0 00:21:27.497 [2024-04-27 00:05:57.618748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.497 [2024-04-27 00:05:57.618754] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.497 [2024-04-27 00:05:57.618758] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.618761] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b3d10): datao=0, datal=4096, cccid=4 00:21:27.497 [2024-04-27 00:05:57.618766] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171bfe0) on tqpair(0x16b3d10): expected_datao=0, payload_size=4096 00:21:27.497 [2024-04-27 00:05:57.618770] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.618777] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.618781] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
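The trace above is the admin-queue bring-up that spdk_nvme_identify performs against the TCP listener: FABRIC CONNECT, property reads of VS/CAP/CC/CSTS, CC.EN = 1, IDENTIFY controller, AER configuration, keep-alive and queue-count negotiation, followed by the namespace identifies. To replay it outside the harness, the same binary can be invoked directly; the transport ID fields and the -L all flag below are copied from the invocation logged earlier, while everything else (a target still listening on 10.0.0.2:4420) is assumed.

  # Hedged replay of the identify run traced above; assumes the target from this
  # job is still serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420.
  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  "$SPDK_BIN/spdk_nvme_identify" \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all 2>&1 | less   # -L all produces the nvme_tcp/nvme_ctrlr debug lines seen in this log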
00:21:27.497 [2024-04-27 00:05:57.622845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.497 [2024-04-27 00:05:57.622853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.497 [2024-04-27 00:05:57.622856] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.622860] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171bfe0) on tqpair=0x16b3d10 00:21:27.497 [2024-04-27 00:05:57.622873] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:27.497 [2024-04-27 00:05:57.622882] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:27.497 [2024-04-27 00:05:57.622890] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.622894] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.622900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.497 [2024-04-27 00:05:57.622912] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171bfe0, cid 4, qid 0 00:21:27.497 [2024-04-27 00:05:57.623104] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.497 [2024-04-27 00:05:57.623110] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.497 [2024-04-27 00:05:57.623114] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623118] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b3d10): datao=0, datal=4096, cccid=4 00:21:27.497 [2024-04-27 00:05:57.623122] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171bfe0) on tqpair(0x16b3d10): expected_datao=0, payload_size=4096 00:21:27.497 [2024-04-27 00:05:57.623126] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623133] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623137] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623279] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.497 [2024-04-27 00:05:57.623286] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.497 [2024-04-27 00:05:57.623289] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623297] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171bfe0) on tqpair=0x16b3d10 00:21:27.497 [2024-04-27 00:05:57.623305] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:27.497 [2024-04-27 00:05:57.623313] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:27.497 [2024-04-27 00:05:57.623321] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:27.497 [2024-04-27 00:05:57.623327] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
set doorbell buffer config (timeout 30000 ms) 00:21:27.497 [2024-04-27 00:05:57.623332] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:27.497 [2024-04-27 00:05:57.623337] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:27.497 [2024-04-27 00:05:57.623341] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:27.497 [2024-04-27 00:05:57.623346] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:27.497 [2024-04-27 00:05:57.623360] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623364] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.623370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.497 [2024-04-27 00:05:57.623377] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623381] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623384] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.623390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.497 [2024-04-27 00:05:57.623403] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171bfe0, cid 4, qid 0 00:21:27.497 [2024-04-27 00:05:57.623408] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c140, cid 5, qid 0 00:21:27.497 [2024-04-27 00:05:57.623582] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.497 [2024-04-27 00:05:57.623588] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.497 [2024-04-27 00:05:57.623592] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623596] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171bfe0) on tqpair=0x16b3d10 00:21:27.497 [2024-04-27 00:05:57.623603] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.497 [2024-04-27 00:05:57.623609] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.497 [2024-04-27 00:05:57.623612] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623616] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171c140) on tqpair=0x16b3d10 00:21:27.497 [2024-04-27 00:05:57.623626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623630] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.623636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.497 [2024-04-27 00:05:57.623645] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c140, cid 5, qid 0 00:21:27.497 [2024-04-27 00:05:57.623834] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.497 
[2024-04-27 00:05:57.623845] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.497 [2024-04-27 00:05:57.623851] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623855] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171c140) on tqpair=0x16b3d10 00:21:27.497 [2024-04-27 00:05:57.623865] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.623869] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.623875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.497 [2024-04-27 00:05:57.623885] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c140, cid 5, qid 0 00:21:27.497 [2024-04-27 00:05:57.624087] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.497 [2024-04-27 00:05:57.624094] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.497 [2024-04-27 00:05:57.624097] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.624101] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171c140) on tqpair=0x16b3d10 00:21:27.497 [2024-04-27 00:05:57.624110] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.624114] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.624121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.497 [2024-04-27 00:05:57.624130] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c140, cid 5, qid 0 00:21:27.497 [2024-04-27 00:05:57.624349] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.497 [2024-04-27 00:05:57.624356] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.497 [2024-04-27 00:05:57.624359] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.624363] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171c140) on tqpair=0x16b3d10 00:21:27.497 [2024-04-27 00:05:57.624375] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.624379] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.624385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.497 [2024-04-27 00:05:57.624393] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.624396] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.624403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.497 [2024-04-27 00:05:57.624410] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.624414] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=6 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.624420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.497 [2024-04-27 00:05:57.624428] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.497 [2024-04-27 00:05:57.624432] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16b3d10) 00:21:27.497 [2024-04-27 00:05:57.624438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.497 [2024-04-27 00:05:57.624449] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c140, cid 5, qid 0 00:21:27.497 [2024-04-27 00:05:57.624454] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171bfe0, cid 4, qid 0 00:21:27.498 [2024-04-27 00:05:57.624459] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c2a0, cid 6, qid 0 00:21:27.498 [2024-04-27 00:05:57.624465] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c400, cid 7, qid 0 00:21:27.498 [2024-04-27 00:05:57.624713] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.498 [2024-04-27 00:05:57.624720] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.498 [2024-04-27 00:05:57.624724] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624727] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b3d10): datao=0, datal=8192, cccid=5 00:21:27.498 [2024-04-27 00:05:57.624732] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171c140) on tqpair(0x16b3d10): expected_datao=0, payload_size=8192 00:21:27.498 [2024-04-27 00:05:57.624736] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624805] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624810] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624816] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.498 [2024-04-27 00:05:57.624821] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.498 [2024-04-27 00:05:57.624825] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624828] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b3d10): datao=0, datal=512, cccid=4 00:21:27.498 [2024-04-27 00:05:57.624833] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171bfe0) on tqpair(0x16b3d10): expected_datao=0, payload_size=512 00:21:27.498 [2024-04-27 00:05:57.624841] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624847] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624851] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624857] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.498 [2024-04-27 00:05:57.624862] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.498 [2024-04-27 00:05:57.624866] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624869] 
nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b3d10): datao=0, datal=512, cccid=6 00:21:27.498 [2024-04-27 00:05:57.624874] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171c2a0) on tqpair(0x16b3d10): expected_datao=0, payload_size=512 00:21:27.498 [2024-04-27 00:05:57.624878] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624885] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624888] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624894] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.498 [2024-04-27 00:05:57.624900] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.498 [2024-04-27 00:05:57.624903] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624907] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b3d10): datao=0, datal=4096, cccid=7 00:21:27.498 [2024-04-27 00:05:57.624911] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171c400) on tqpair(0x16b3d10): expected_datao=0, payload_size=4096 00:21:27.498 [2024-04-27 00:05:57.624915] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624940] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.624944] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.625159] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.498 [2024-04-27 00:05:57.625165] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.498 [2024-04-27 00:05:57.625169] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.625173] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171c140) on tqpair=0x16b3d10 00:21:27.498 [2024-04-27 00:05:57.625186] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.498 [2024-04-27 00:05:57.625194] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.498 [2024-04-27 00:05:57.625198] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.625202] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171bfe0) on tqpair=0x16b3d10 00:21:27.498 [2024-04-27 00:05:57.625211] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.498 [2024-04-27 00:05:57.625217] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.498 [2024-04-27 00:05:57.625221] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.625225] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171c2a0) on tqpair=0x16b3d10 00:21:27.498 [2024-04-27 00:05:57.625232] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.498 [2024-04-27 00:05:57.625239] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.498 [2024-04-27 00:05:57.625242] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.498 [2024-04-27 00:05:57.625246] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171c400) on tqpair=0x16b3d10 00:21:27.498 ===================================================== 00:21:27.498 NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.498 ===================================================== 00:21:27.498 Controller Capabilities/Features 00:21:27.498 ================================ 00:21:27.498 Vendor ID: 8086 00:21:27.498 Subsystem Vendor ID: 8086 00:21:27.498 Serial Number: SPDK00000000000001 00:21:27.498 Model Number: SPDK bdev Controller 00:21:27.498 Firmware Version: 24.05 00:21:27.498 Recommended Arb Burst: 6 00:21:27.498 IEEE OUI Identifier: e4 d2 5c 00:21:27.498 Multi-path I/O 00:21:27.498 May have multiple subsystem ports: Yes 00:21:27.498 May have multiple controllers: Yes 00:21:27.498 Associated with SR-IOV VF: No 00:21:27.498 Max Data Transfer Size: 131072 00:21:27.498 Max Number of Namespaces: 32 00:21:27.498 Max Number of I/O Queues: 127 00:21:27.498 NVMe Specification Version (VS): 1.3 00:21:27.498 NVMe Specification Version (Identify): 1.3 00:21:27.498 Maximum Queue Entries: 128 00:21:27.498 Contiguous Queues Required: Yes 00:21:27.498 Arbitration Mechanisms Supported 00:21:27.498 Weighted Round Robin: Not Supported 00:21:27.498 Vendor Specific: Not Supported 00:21:27.498 Reset Timeout: 15000 ms 00:21:27.498 Doorbell Stride: 4 bytes 00:21:27.498 NVM Subsystem Reset: Not Supported 00:21:27.498 Command Sets Supported 00:21:27.498 NVM Command Set: Supported 00:21:27.498 Boot Partition: Not Supported 00:21:27.498 Memory Page Size Minimum: 4096 bytes 00:21:27.498 Memory Page Size Maximum: 4096 bytes 00:21:27.498 Persistent Memory Region: Not Supported 00:21:27.498 Optional Asynchronous Events Supported 00:21:27.498 Namespace Attribute Notices: Supported 00:21:27.498 Firmware Activation Notices: Not Supported 00:21:27.498 ANA Change Notices: Not Supported 00:21:27.498 PLE Aggregate Log Change Notices: Not Supported 00:21:27.498 LBA Status Info Alert Notices: Not Supported 00:21:27.498 EGE Aggregate Log Change Notices: Not Supported 00:21:27.498 Normal NVM Subsystem Shutdown event: Not Supported 00:21:27.498 Zone Descriptor Change Notices: Not Supported 00:21:27.498 Discovery Log Change Notices: Not Supported 00:21:27.498 Controller Attributes 00:21:27.498 128-bit Host Identifier: Supported 00:21:27.498 Non-Operational Permissive Mode: Not Supported 00:21:27.498 NVM Sets: Not Supported 00:21:27.498 Read Recovery Levels: Not Supported 00:21:27.498 Endurance Groups: Not Supported 00:21:27.498 Predictable Latency Mode: Not Supported 00:21:27.498 Traffic Based Keep ALive: Not Supported 00:21:27.498 Namespace Granularity: Not Supported 00:21:27.498 SQ Associations: Not Supported 00:21:27.498 UUID List: Not Supported 00:21:27.498 Multi-Domain Subsystem: Not Supported 00:21:27.498 Fixed Capacity Management: Not Supported 00:21:27.498 Variable Capacity Management: Not Supported 00:21:27.498 Delete Endurance Group: Not Supported 00:21:27.498 Delete NVM Set: Not Supported 00:21:27.498 Extended LBA Formats Supported: Not Supported 00:21:27.498 Flexible Data Placement Supported: Not Supported 00:21:27.498 00:21:27.498 Controller Memory Buffer Support 00:21:27.498 ================================ 00:21:27.498 Supported: No 00:21:27.498 00:21:27.498 Persistent Memory Region Support 00:21:27.498 ================================ 00:21:27.498 Supported: No 00:21:27.498 00:21:27.498 Admin Command Set Attributes 00:21:27.498 ============================ 00:21:27.498 Security Send/Receive: Not Supported 00:21:27.498 Format NVM: Not Supported 00:21:27.498 Firmware Activate/Download: Not Supported 00:21:27.498 Namespace Management: Not Supported 00:21:27.498 Device Self-Test: Not 
Supported 00:21:27.498 Directives: Not Supported 00:21:27.498 NVMe-MI: Not Supported 00:21:27.498 Virtualization Management: Not Supported 00:21:27.498 Doorbell Buffer Config: Not Supported 00:21:27.498 Get LBA Status Capability: Not Supported 00:21:27.498 Command & Feature Lockdown Capability: Not Supported 00:21:27.498 Abort Command Limit: 4 00:21:27.498 Async Event Request Limit: 4 00:21:27.498 Number of Firmware Slots: N/A 00:21:27.498 Firmware Slot 1 Read-Only: N/A 00:21:27.498 Firmware Activation Without Reset: N/A 00:21:27.498 Multiple Update Detection Support: N/A 00:21:27.498 Firmware Update Granularity: No Information Provided 00:21:27.498 Per-Namespace SMART Log: No 00:21:27.498 Asymmetric Namespace Access Log Page: Not Supported 00:21:27.498 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:27.499 Command Effects Log Page: Supported 00:21:27.499 Get Log Page Extended Data: Supported 00:21:27.499 Telemetry Log Pages: Not Supported 00:21:27.499 Persistent Event Log Pages: Not Supported 00:21:27.499 Supported Log Pages Log Page: May Support 00:21:27.499 Commands Supported & Effects Log Page: Not Supported 00:21:27.499 Feature Identifiers & Effects Log Page:May Support 00:21:27.499 NVMe-MI Commands & Effects Log Page: May Support 00:21:27.499 Data Area 4 for Telemetry Log: Not Supported 00:21:27.499 Error Log Page Entries Supported: 128 00:21:27.499 Keep Alive: Supported 00:21:27.499 Keep Alive Granularity: 10000 ms 00:21:27.499 00:21:27.499 NVM Command Set Attributes 00:21:27.499 ========================== 00:21:27.499 Submission Queue Entry Size 00:21:27.499 Max: 64 00:21:27.499 Min: 64 00:21:27.499 Completion Queue Entry Size 00:21:27.499 Max: 16 00:21:27.499 Min: 16 00:21:27.499 Number of Namespaces: 32 00:21:27.499 Compare Command: Supported 00:21:27.499 Write Uncorrectable Command: Not Supported 00:21:27.499 Dataset Management Command: Supported 00:21:27.499 Write Zeroes Command: Supported 00:21:27.499 Set Features Save Field: Not Supported 00:21:27.499 Reservations: Supported 00:21:27.499 Timestamp: Not Supported 00:21:27.499 Copy: Supported 00:21:27.499 Volatile Write Cache: Present 00:21:27.499 Atomic Write Unit (Normal): 1 00:21:27.499 Atomic Write Unit (PFail): 1 00:21:27.499 Atomic Compare & Write Unit: 1 00:21:27.499 Fused Compare & Write: Supported 00:21:27.499 Scatter-Gather List 00:21:27.499 SGL Command Set: Supported 00:21:27.499 SGL Keyed: Supported 00:21:27.499 SGL Bit Bucket Descriptor: Not Supported 00:21:27.499 SGL Metadata Pointer: Not Supported 00:21:27.499 Oversized SGL: Not Supported 00:21:27.499 SGL Metadata Address: Not Supported 00:21:27.499 SGL Offset: Supported 00:21:27.499 Transport SGL Data Block: Not Supported 00:21:27.499 Replay Protected Memory Block: Not Supported 00:21:27.499 00:21:27.499 Firmware Slot Information 00:21:27.499 ========================= 00:21:27.499 Active slot: 1 00:21:27.499 Slot 1 Firmware Revision: 24.05 00:21:27.499 00:21:27.499 00:21:27.499 Commands Supported and Effects 00:21:27.499 ============================== 00:21:27.499 Admin Commands 00:21:27.499 -------------- 00:21:27.499 Get Log Page (02h): Supported 00:21:27.499 Identify (06h): Supported 00:21:27.499 Abort (08h): Supported 00:21:27.499 Set Features (09h): Supported 00:21:27.499 Get Features (0Ah): Supported 00:21:27.499 Asynchronous Event Request (0Ch): Supported 00:21:27.499 Keep Alive (18h): Supported 00:21:27.499 I/O Commands 00:21:27.499 ------------ 00:21:27.499 Flush (00h): Supported LBA-Change 00:21:27.499 Write (01h): Supported LBA-Change 00:21:27.499 
Read (02h): Supported 00:21:27.499 Compare (05h): Supported 00:21:27.499 Write Zeroes (08h): Supported LBA-Change 00:21:27.499 Dataset Management (09h): Supported LBA-Change 00:21:27.499 Copy (19h): Supported LBA-Change 00:21:27.499 Unknown (79h): Supported LBA-Change 00:21:27.499 Unknown (7Ah): Supported 00:21:27.499 00:21:27.499 Error Log 00:21:27.499 ========= 00:21:27.499 00:21:27.499 Arbitration 00:21:27.499 =========== 00:21:27.499 Arbitration Burst: 1 00:21:27.499 00:21:27.499 Power Management 00:21:27.499 ================ 00:21:27.499 Number of Power States: 1 00:21:27.499 Current Power State: Power State #0 00:21:27.499 Power State #0: 00:21:27.499 Max Power: 0.00 W 00:21:27.499 Non-Operational State: Operational 00:21:27.499 Entry Latency: Not Reported 00:21:27.499 Exit Latency: Not Reported 00:21:27.499 Relative Read Throughput: 0 00:21:27.499 Relative Read Latency: 0 00:21:27.499 Relative Write Throughput: 0 00:21:27.499 Relative Write Latency: 0 00:21:27.499 Idle Power: Not Reported 00:21:27.499 Active Power: Not Reported 00:21:27.499 Non-Operational Permissive Mode: Not Supported 00:21:27.499 00:21:27.499 Health Information 00:21:27.499 ================== 00:21:27.499 Critical Warnings: 00:21:27.499 Available Spare Space: OK 00:21:27.499 Temperature: OK 00:21:27.499 Device Reliability: OK 00:21:27.499 Read Only: No 00:21:27.499 Volatile Memory Backup: OK 00:21:27.499 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:27.499 Temperature Threshold: [2024-04-27 00:05:57.625348] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.625354] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16b3d10) 00:21:27.499 [2024-04-27 00:05:57.625361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.499 [2024-04-27 00:05:57.625372] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c400, cid 7, qid 0 00:21:27.499 [2024-04-27 00:05:57.625552] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.499 [2024-04-27 00:05:57.625558] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.499 [2024-04-27 00:05:57.625562] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.625566] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171c400) on tqpair=0x16b3d10 00:21:27.499 [2024-04-27 00:05:57.625593] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:27.499 [2024-04-27 00:05:57.625604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.499 [2024-04-27 00:05:57.625611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.499 [2024-04-27 00:05:57.625617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.499 [2024-04-27 00:05:57.625623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.499 [2024-04-27 00:05:57.625631] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.625635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
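The controller report interleaved with this trace (and continued below once the controller shutdown finishes) is what spdk_nvme_identify decodes from IDENTIFY, GET LOG PAGE and GET FEATURES. A cross-check from a kernel initiator, rather than the SPDK host stack, would use nvme-cli; this is a hedged sketch, not part of this test run, and it assumes nvme-cli is installed, the listener is still up, and the device nodes come up as /dev/nvme0 and /dev/nvme0n1.

  # Hypothetical nvme-cli cross-check (not executed by identify.sh):
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                         # confirm the namespace appeared, e.g. /dev/nvme0n1
  nvme id-ctrl /dev/nvme0           # controller data mirrored in the report above
  nvme id-ns /dev/nvme0n1           # namespace data (NGUID/EUI64/UUID fields)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1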
00:21:27.499 [2024-04-27 00:05:57.625639] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b3d10) 00:21:27.499 [2024-04-27 00:05:57.625646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.499 [2024-04-27 00:05:57.625657] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171be80, cid 3, qid 0 00:21:27.499 [2024-04-27 00:05:57.625902] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.499 [2024-04-27 00:05:57.625909] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.499 [2024-04-27 00:05:57.625913] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.625917] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171be80) on tqpair=0x16b3d10 00:21:27.499 [2024-04-27 00:05:57.625924] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.625928] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.625932] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b3d10) 00:21:27.499 [2024-04-27 00:05:57.625941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.499 [2024-04-27 00:05:57.625954] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171be80, cid 3, qid 0 00:21:27.499 [2024-04-27 00:05:57.626150] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.499 [2024-04-27 00:05:57.626157] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.499 [2024-04-27 00:05:57.626160] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.626164] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171be80) on tqpair=0x16b3d10 00:21:27.499 [2024-04-27 00:05:57.626170] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:27.499 [2024-04-27 00:05:57.626174] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:27.499 [2024-04-27 00:05:57.626183] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.626188] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.626191] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b3d10) 00:21:27.499 [2024-04-27 00:05:57.626198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.499 [2024-04-27 00:05:57.626208] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171be80, cid 3, qid 0 00:21:27.499 [2024-04-27 00:05:57.626453] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.499 [2024-04-27 00:05:57.626459] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.499 [2024-04-27 00:05:57.626463] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.626467] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171be80) on tqpair=0x16b3d10 00:21:27.499 [2024-04-27 00:05:57.626478] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:27.499 [2024-04-27 00:05:57.626482] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.499 [2024-04-27 00:05:57.626485] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b3d10) 00:21:27.500 [2024-04-27 00:05:57.626492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.500 [2024-04-27 00:05:57.626502] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171be80, cid 3, qid 0 00:21:27.500 [2024-04-27 00:05:57.626712] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.500 [2024-04-27 00:05:57.626718] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.500 [2024-04-27 00:05:57.626721] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.500 [2024-04-27 00:05:57.626725] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171be80) on tqpair=0x16b3d10 00:21:27.500 [2024-04-27 00:05:57.626736] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.500 [2024-04-27 00:05:57.626740] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.500 [2024-04-27 00:05:57.626743] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b3d10) 00:21:27.500 [2024-04-27 00:05:57.626750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.500 [2024-04-27 00:05:57.626760] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171be80, cid 3, qid 0 00:21:27.500 [2024-04-27 00:05:57.630845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.500 [2024-04-27 00:05:57.630854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.500 [2024-04-27 00:05:57.630857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.500 [2024-04-27 00:05:57.630861] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x171be80) on tqpair=0x16b3d10 00:21:27.500 [2024-04-27 00:05:57.630872] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:21:27.500 0 Kelvin (-273 Celsius) 00:21:27.500 Available Spare: 0% 00:21:27.500 Available Spare Threshold: 0% 00:21:27.500 Life Percentage Used: 0% 00:21:27.500 Data Units Read: 0 00:21:27.500 Data Units Written: 0 00:21:27.500 Host Read Commands: 0 00:21:27.500 Host Write Commands: 0 00:21:27.500 Controller Busy Time: 0 minutes 00:21:27.500 Power Cycles: 0 00:21:27.500 Power On Hours: 0 hours 00:21:27.500 Unsafe Shutdowns: 0 00:21:27.500 Unrecoverable Media Errors: 0 00:21:27.500 Lifetime Error Log Entries: 0 00:21:27.500 Warning Temperature Time: 0 minutes 00:21:27.500 Critical Temperature Time: 0 minutes 00:21:27.500 00:21:27.500 Number of Queues 00:21:27.500 ================ 00:21:27.500 Number of I/O Submission Queues: 127 00:21:27.500 Number of I/O Completion Queues: 127 00:21:27.500 00:21:27.500 Active Namespaces 00:21:27.500 ================= 00:21:27.500 Namespace ID:1 00:21:27.500 Error Recovery Timeout: Unlimited 00:21:27.500 Command Set Identifier: NVM (00h) 00:21:27.500 Deallocate: Supported 00:21:27.500 Deallocated/Unwritten Error: Not Supported 00:21:27.500 Deallocated Read Value: Unknown 00:21:27.500 Deallocate in Write Zeroes: Not Supported 00:21:27.500 Deallocated Guard Field: 0xFFFF 00:21:27.500 Flush: 
Supported 00:21:27.500 Reservation: Supported 00:21:27.500 Namespace Sharing Capabilities: Multiple Controllers 00:21:27.500 Size (in LBAs): 131072 (0GiB) 00:21:27.500 Capacity (in LBAs): 131072 (0GiB) 00:21:27.500 Utilization (in LBAs): 131072 (0GiB) 00:21:27.500 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:27.500 EUI64: ABCDEF0123456789 00:21:27.500 UUID: ad9a7973-3d96-43e4-8780-a3c339847310 00:21:27.500 Thin Provisioning: Not Supported 00:21:27.500 Per-NS Atomic Units: Yes 00:21:27.500 Atomic Boundary Size (Normal): 0 00:21:27.500 Atomic Boundary Size (PFail): 0 00:21:27.500 Atomic Boundary Offset: 0 00:21:27.500 Maximum Single Source Range Length: 65535 00:21:27.500 Maximum Copy Length: 65535 00:21:27.500 Maximum Source Range Count: 1 00:21:27.500 NGUID/EUI64 Never Reused: No 00:21:27.500 Namespace Write Protected: No 00:21:27.500 Number of LBA Formats: 1 00:21:27.500 Current LBA Format: LBA Format #00 00:21:27.500 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:27.500 00:21:27.500 00:05:57 -- host/identify.sh@51 -- # sync 00:21:27.500 00:05:57 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:27.500 00:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.500 00:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.500 00:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.500 00:05:57 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:27.500 00:05:57 -- host/identify.sh@56 -- # nvmftestfini 00:21:27.500 00:05:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:27.500 00:05:57 -- nvmf/common.sh@117 -- # sync 00:21:27.500 00:05:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.500 00:05:57 -- nvmf/common.sh@120 -- # set +e 00:21:27.500 00:05:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.500 00:05:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.500 rmmod nvme_tcp 00:21:27.500 rmmod nvme_fabrics 00:21:27.500 rmmod nvme_keyring 00:21:27.760 00:05:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.760 00:05:57 -- nvmf/common.sh@124 -- # set -e 00:21:27.760 00:05:57 -- nvmf/common.sh@125 -- # return 0 00:21:27.760 00:05:57 -- nvmf/common.sh@478 -- # '[' -n 469540 ']' 00:21:27.760 00:05:57 -- nvmf/common.sh@479 -- # killprocess 469540 00:21:27.760 00:05:57 -- common/autotest_common.sh@936 -- # '[' -z 469540 ']' 00:21:27.760 00:05:57 -- common/autotest_common.sh@940 -- # kill -0 469540 00:21:27.760 00:05:57 -- common/autotest_common.sh@941 -- # uname 00:21:27.760 00:05:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:27.760 00:05:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 469540 00:21:27.760 00:05:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:27.760 00:05:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:27.760 00:05:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 469540' 00:21:27.760 killing process with pid 469540 00:21:27.760 00:05:57 -- common/autotest_common.sh@955 -- # kill 469540 00:21:27.760 [2024-04-27 00:05:57.795430] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:27.760 00:05:57 -- common/autotest_common.sh@960 -- # wait 469540 00:21:27.760 00:05:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:27.760 00:05:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:27.760 00:05:57 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:27.760 00:05:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.760 00:05:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.760 00:05:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.760 00:05:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.760 00:05:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.310 00:06:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.310 00:21:30.310 real 0m11.041s 00:21:30.310 user 0m7.678s 00:21:30.310 sys 0m5.709s 00:21:30.310 00:06:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:30.310 00:06:00 -- common/autotest_common.sh@10 -- # set +x 00:21:30.310 ************************************ 00:21:30.310 END TEST nvmf_identify 00:21:30.310 ************************************ 00:21:30.310 00:06:00 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:30.310 00:06:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:30.310 00:06:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:30.310 00:06:00 -- common/autotest_common.sh@10 -- # set +x 00:21:30.310 ************************************ 00:21:30.310 START TEST nvmf_perf 00:21:30.310 ************************************ 00:21:30.310 00:06:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:30.310 * Looking for test storage... 00:21:30.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:30.310 00:06:00 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.310 00:06:00 -- nvmf/common.sh@7 -- # uname -s 00:21:30.310 00:06:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.310 00:06:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.310 00:06:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.310 00:06:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.310 00:06:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.310 00:06:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.310 00:06:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.310 00:06:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.310 00:06:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.310 00:06:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.310 00:06:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.310 00:06:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.310 00:06:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.310 00:06:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.310 00:06:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.310 00:06:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.310 00:06:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.310 00:06:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.310 00:06:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.310 00:06:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:30.310 00:06:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.310 00:06:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.310 00:06:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.310 00:06:00 -- paths/export.sh@5 -- # export PATH 00:21:30.310 00:06:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.310 00:06:00 -- nvmf/common.sh@47 -- # : 0 00:21:30.310 00:06:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.310 00:06:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.310 00:06:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.310 00:06:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.310 00:06:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.311 00:06:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.311 00:06:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.311 00:06:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.311 00:06:00 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:30.311 00:06:00 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:30.311 00:06:00 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:30.311 00:06:00 -- host/perf.sh@17 -- # nvmftestinit 00:21:30.311 00:06:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:30.311 00:06:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.311 00:06:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:30.311 00:06:00 -- nvmf/common.sh@399 -- # local 
-g is_hw=no 00:21:30.311 00:06:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:30.311 00:06:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.311 00:06:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.311 00:06:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.311 00:06:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:30.311 00:06:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:30.311 00:06:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.311 00:06:00 -- common/autotest_common.sh@10 -- # set +x 00:21:38.448 00:06:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:38.448 00:06:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:38.448 00:06:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:38.448 00:06:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:38.448 00:06:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:38.448 00:06:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:38.448 00:06:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:38.448 00:06:07 -- nvmf/common.sh@295 -- # net_devs=() 00:21:38.448 00:06:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:38.448 00:06:07 -- nvmf/common.sh@296 -- # e810=() 00:21:38.448 00:06:07 -- nvmf/common.sh@296 -- # local -ga e810 00:21:38.448 00:06:07 -- nvmf/common.sh@297 -- # x722=() 00:21:38.448 00:06:07 -- nvmf/common.sh@297 -- # local -ga x722 00:21:38.448 00:06:07 -- nvmf/common.sh@298 -- # mlx=() 00:21:38.448 00:06:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:38.448 00:06:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.448 00:06:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:38.448 00:06:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:38.448 00:06:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:38.448 00:06:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.448 00:06:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:38.448 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:38.448 00:06:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.448 00:06:07 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.448 00:06:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:38.448 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:38.448 00:06:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:38.448 00:06:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.448 00:06:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.448 00:06:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:38.448 00:06:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.448 00:06:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:38.448 Found net devices under 0000:31:00.0: cvl_0_0 00:21:38.448 00:06:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.448 00:06:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.448 00:06:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.448 00:06:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:38.448 00:06:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.448 00:06:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:38.448 Found net devices under 0000:31:00.1: cvl_0_1 00:21:38.448 00:06:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.448 00:06:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:38.448 00:06:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:38.448 00:06:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:38.448 00:06:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:38.449 00:06:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:38.449 00:06:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.449 00:06:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.449 00:06:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.449 00:06:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:38.449 00:06:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.449 00:06:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.449 00:06:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:38.449 00:06:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.449 00:06:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.449 00:06:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:38.449 00:06:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:38.449 00:06:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.449 00:06:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.449 00:06:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.449 00:06:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.449 00:06:07 -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:38.449 00:06:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.449 00:06:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.449 00:06:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.449 00:06:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:38.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:21:38.449 00:21:38.449 --- 10.0.0.2 ping statistics --- 00:21:38.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.449 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:21:38.449 00:06:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:21:38.449 00:21:38.449 --- 10.0.0.1 ping statistics --- 00:21:38.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.449 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:21:38.449 00:06:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.449 00:06:07 -- nvmf/common.sh@411 -- # return 0 00:21:38.449 00:06:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:38.449 00:06:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.449 00:06:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:38.449 00:06:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:38.449 00:06:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.449 00:06:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:38.449 00:06:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:38.449 00:06:07 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:38.449 00:06:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:38.449 00:06:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:38.449 00:06:07 -- common/autotest_common.sh@10 -- # set +x 00:21:38.449 00:06:07 -- nvmf/common.sh@470 -- # nvmfpid=474285 00:21:38.449 00:06:07 -- nvmf/common.sh@471 -- # waitforlisten 474285 00:21:38.449 00:06:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:38.449 00:06:07 -- common/autotest_common.sh@817 -- # '[' -z 474285 ']' 00:21:38.449 00:06:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.449 00:06:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:38.449 00:06:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.449 00:06:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:38.449 00:06:07 -- common/autotest_common.sh@10 -- # set +x 00:21:38.449 [2024-04-27 00:06:07.616969] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:21:38.449 [2024-04-27 00:06:07.617039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.449 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.449 [2024-04-27 00:06:07.689170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.449 [2024-04-27 00:06:07.763442] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.449 [2024-04-27 00:06:07.763483] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.449 [2024-04-27 00:06:07.763491] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.449 [2024-04-27 00:06:07.763497] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.449 [2024-04-27 00:06:07.763503] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.449 [2024-04-27 00:06:07.763615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.449 [2024-04-27 00:06:07.763733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.449 [2024-04-27 00:06:07.763894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.449 [2024-04-27 00:06:07.763895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.449 00:06:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:38.449 00:06:08 -- common/autotest_common.sh@850 -- # return 0 00:21:38.449 00:06:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:38.449 00:06:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:38.449 00:06:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.449 00:06:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.449 00:06:08 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:38.449 00:06:08 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:38.709 00:06:08 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:38.709 00:06:08 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:38.969 00:06:09 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:21:38.969 00:06:09 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:39.229 00:06:09 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:39.229 00:06:09 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:21:39.229 00:06:09 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:39.229 00:06:09 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:39.229 00:06:09 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:39.229 [2024-04-27 00:06:09.388194] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.229 00:06:09 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:39.489 00:06:09 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:39.489 00:06:09 -- host/perf.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:39.750 00:06:09 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:39.750 00:06:09 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:39.750 00:06:09 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.010 [2024-04-27 00:06:10.070692] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.010 00:06:10 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:40.270 00:06:10 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:21:40.270 00:06:10 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:40.270 00:06:10 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:40.270 00:06:10 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:41.651 Initializing NVMe Controllers 00:21:41.651 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:21:41.651 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:21:41.651 Initialization complete. Launching workers. 00:21:41.651 ======================================================== 00:21:41.651 Latency(us) 00:21:41.651 Device Information : IOPS MiB/s Average min max 00:21:41.651 PCIE (0000:65:00.0) NSID 1 from core 0: 80237.38 313.43 398.42 13.39 7196.54 00:21:41.651 ======================================================== 00:21:41.651 Total : 80237.38 313.43 398.42 13.39 7196.54 00:21:41.651 00:21:41.651 00:06:11 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.651 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.036 Initializing NVMe Controllers 00:21:43.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:43.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:43.036 Initialization complete. Launching workers. 
00:21:43.036 ======================================================== 00:21:43.036 Latency(us) 00:21:43.036 Device Information : IOPS MiB/s Average min max 00:21:43.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10285.51 338.07 45932.07 00:21:43.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.00 0.23 17275.49 7948.36 47920.00 00:21:43.036 ======================================================== 00:21:43.036 Total : 160.00 0.62 12906.75 338.07 47920.00 00:21:43.036 00:21:43.036 00:06:12 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:43.036 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.420 Initializing NVMe Controllers 00:21:44.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:44.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:44.420 Initialization complete. Launching workers. 00:21:44.420 ======================================================== 00:21:44.420 Latency(us) 00:21:44.420 Device Information : IOPS MiB/s Average min max 00:21:44.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11658.98 45.54 2744.67 436.58 9873.46 00:21:44.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3743.99 14.62 8593.02 4821.86 18066.59 00:21:44.420 ======================================================== 00:21:44.420 Total : 15402.97 60.17 4166.23 436.58 18066.59 00:21:44.420 00:21:44.420 00:06:14 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:44.420 00:06:14 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:44.420 00:06:14 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:44.420 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.985 Initializing NVMe Controllers 00:21:46.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:46.985 Controller IO queue size 128, less than required. 00:21:46.985 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:46.985 Controller IO queue size 128, less than required. 00:21:46.985 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:46.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:46.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:46.985 Initialization complete. Launching workers. 
00:21:46.985 ======================================================== 00:21:46.985 Latency(us) 00:21:46.985 Device Information : IOPS MiB/s Average min max 00:21:46.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1264.58 316.14 103899.69 54366.14 145792.17 00:21:46.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.80 151.20 224968.08 62523.99 367194.08 00:21:46.985 ======================================================== 00:21:46.985 Total : 1869.38 467.34 143068.88 54366.14 367194.08 00:21:46.985 00:21:46.985 00:06:17 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:46.985 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.246 No valid NVMe controllers or AIO or URING devices found 00:21:47.246 Initializing NVMe Controllers 00:21:47.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.246 Controller IO queue size 128, less than required. 00:21:47.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.246 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:47.246 Controller IO queue size 128, less than required. 00:21:47.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.246 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:47.246 WARNING: Some requested NVMe devices were skipped 00:21:47.246 00:06:17 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:47.246 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.793 Initializing NVMe Controllers 00:21:49.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.793 Controller IO queue size 128, less than required. 00:21:49.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:49.793 Controller IO queue size 128, less than required. 00:21:49.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:49.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:49.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:49.793 Initialization complete. Launching workers. 
00:21:49.793 00:21:49.793 ==================== 00:21:49.793 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:49.793 TCP transport: 00:21:49.793 polls: 25038 00:21:49.793 idle_polls: 10853 00:21:49.793 sock_completions: 14185 00:21:49.793 nvme_completions: 4535 00:21:49.793 submitted_requests: 6806 00:21:49.793 queued_requests: 1 00:21:49.793 00:21:49.793 ==================== 00:21:49.793 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:49.793 TCP transport: 00:21:49.793 polls: 24673 00:21:49.793 idle_polls: 13341 00:21:49.793 sock_completions: 11332 00:21:49.793 nvme_completions: 7781 00:21:49.793 submitted_requests: 11726 00:21:49.793 queued_requests: 1 00:21:49.793 ======================================================== 00:21:49.793 Latency(us) 00:21:49.793 Device Information : IOPS MiB/s Average min max 00:21:49.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1133.40 283.35 115739.08 77046.94 164320.06 00:21:49.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1944.84 486.21 66878.58 38341.11 102408.65 00:21:49.793 ======================================================== 00:21:49.793 Total : 3078.24 769.56 84868.96 38341.11 164320.06 00:21:49.793 00:21:49.793 00:06:19 -- host/perf.sh@66 -- # sync 00:21:49.793 00:06:19 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.793 00:06:19 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:49.793 00:06:19 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:49.793 00:06:19 -- host/perf.sh@114 -- # nvmftestfini 00:21:49.793 00:06:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:49.793 00:06:19 -- nvmf/common.sh@117 -- # sync 00:21:49.793 00:06:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:49.793 00:06:19 -- nvmf/common.sh@120 -- # set +e 00:21:49.793 00:06:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.793 00:06:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:49.793 rmmod nvme_tcp 00:21:49.793 rmmod nvme_fabrics 00:21:49.793 rmmod nvme_keyring 00:21:49.793 00:06:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:49.793 00:06:19 -- nvmf/common.sh@124 -- # set -e 00:21:49.793 00:06:19 -- nvmf/common.sh@125 -- # return 0 00:21:49.793 00:06:19 -- nvmf/common.sh@478 -- # '[' -n 474285 ']' 00:21:49.793 00:06:19 -- nvmf/common.sh@479 -- # killprocess 474285 00:21:49.793 00:06:19 -- common/autotest_common.sh@936 -- # '[' -z 474285 ']' 00:21:49.793 00:06:19 -- common/autotest_common.sh@940 -- # kill -0 474285 00:21:49.793 00:06:19 -- common/autotest_common.sh@941 -- # uname 00:21:49.793 00:06:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:49.793 00:06:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 474285 00:21:49.793 00:06:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:49.793 00:06:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:49.793 00:06:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 474285' 00:21:49.793 killing process with pid 474285 00:21:49.793 00:06:20 -- common/autotest_common.sh@955 -- # kill 474285 00:21:49.793 00:06:20 -- common/autotest_common.sh@960 -- # wait 474285 00:21:52.338 00:06:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:52.338 00:06:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:52.338 00:06:21 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:21:52.338 00:06:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.338 00:06:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.338 00:06:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.338 00:06:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.338 00:06:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.276 00:06:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.276 00:21:54.276 real 0m23.848s 00:21:54.276 user 0m57.743s 00:21:54.276 sys 0m8.077s 00:21:54.276 00:06:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:54.276 00:06:24 -- common/autotest_common.sh@10 -- # set +x 00:21:54.276 ************************************ 00:21:54.276 END TEST nvmf_perf 00:21:54.276 ************************************ 00:21:54.276 00:06:24 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:54.276 00:06:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:54.276 00:06:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:54.276 00:06:24 -- common/autotest_common.sh@10 -- # set +x 00:21:54.276 ************************************ 00:21:54.276 START TEST nvmf_fio_host 00:21:54.276 ************************************ 00:21:54.276 00:06:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:54.276 * Looking for test storage... 00:21:54.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.276 00:06:24 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.276 00:06:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.276 00:06:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.276 00:06:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.276 00:06:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.276 00:06:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.276 00:06:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.276 00:06:24 -- paths/export.sh@5 -- # export PATH 00:21:54.276 00:06:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.276 00:06:24 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.276 00:06:24 -- nvmf/common.sh@7 -- # uname -s 00:21:54.276 00:06:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.276 00:06:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.276 00:06:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.276 00:06:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.276 00:06:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.276 00:06:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.276 00:06:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.276 00:06:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.276 00:06:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.276 00:06:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.276 00:06:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:54.276 00:06:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:54.276 00:06:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.276 00:06:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.276 00:06:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.276 00:06:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.276 00:06:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.276 00:06:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.276 00:06:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.276 00:06:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.276 00:06:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.276 00:06:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.277 00:06:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.277 00:06:24 -- paths/export.sh@5 -- # export PATH 00:21:54.277 00:06:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.277 00:06:24 -- nvmf/common.sh@47 -- # : 0 00:21:54.277 00:06:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.277 00:06:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.277 00:06:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.277 00:06:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.277 00:06:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.277 00:06:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.277 00:06:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.277 00:06:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.277 00:06:24 -- host/fio.sh@12 -- # nvmftestinit 00:21:54.277 00:06:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:54.277 00:06:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.277 00:06:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:54.277 00:06:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:54.277 00:06:24 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:21:54.277 00:06:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.277 00:06:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.277 00:06:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.277 00:06:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:54.277 00:06:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:54.277 00:06:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.277 00:06:24 -- common/autotest_common.sh@10 -- # set +x 00:22:02.428 00:06:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:02.428 00:06:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.428 00:06:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.428 00:06:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.428 00:06:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.428 00:06:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.428 00:06:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.428 00:06:31 -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.428 00:06:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.428 00:06:31 -- nvmf/common.sh@296 -- # e810=() 00:22:02.428 00:06:31 -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.428 00:06:31 -- nvmf/common.sh@297 -- # x722=() 00:22:02.428 00:06:31 -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.428 00:06:31 -- nvmf/common.sh@298 -- # mlx=() 00:22:02.428 00:06:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.428 00:06:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.428 00:06:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.428 00:06:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.428 00:06:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.428 00:06:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.428 00:06:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.428 00:06:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.428 00:06:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.428 00:06:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:02.428 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:02.428 00:06:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.428 00:06:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.428 00:06:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.428 00:06:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.428 00:06:31 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:22:02.428 00:06:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.429 00:06:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:02.429 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:02.429 00:06:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.429 00:06:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.429 00:06:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.429 00:06:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:02.429 00:06:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.429 00:06:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:02.429 Found net devices under 0000:31:00.0: cvl_0_0 00:22:02.429 00:06:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.429 00:06:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.429 00:06:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.429 00:06:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:02.429 00:06:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.429 00:06:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:02.429 Found net devices under 0000:31:00.1: cvl_0_1 00:22:02.429 00:06:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.429 00:06:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:02.429 00:06:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:02.429 00:06:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:02.429 00:06:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.429 00:06:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.429 00:06:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.429 00:06:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.429 00:06:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.429 00:06:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.429 00:06:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.429 00:06:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.429 00:06:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.429 00:06:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.429 00:06:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.429 00:06:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.429 00:06:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.429 00:06:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.429 00:06:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.429 00:06:31 -- nvmf/common.sh@258 -- # ip link set 
cvl_0_1 up 00:22:02.429 00:06:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.429 00:06:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.429 00:06:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.429 00:06:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:22:02.429 00:22:02.429 --- 10.0.0.2 ping statistics --- 00:22:02.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.429 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:22:02.429 00:06:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:22:02.429 00:22:02.429 --- 10.0.0.1 ping statistics --- 00:22:02.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.429 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:22:02.429 00:06:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.429 00:06:31 -- nvmf/common.sh@411 -- # return 0 00:22:02.429 00:06:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:02.429 00:06:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.429 00:06:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:02.429 00:06:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.429 00:06:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:02.429 00:06:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:02.429 00:06:31 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:02.429 00:06:31 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:02.429 00:06:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:02.429 00:06:31 -- common/autotest_common.sh@10 -- # set +x 00:22:02.429 00:06:31 -- host/fio.sh@22 -- # nvmfpid=481826 00:22:02.429 00:06:31 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:02.429 00:06:31 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:02.429 00:06:31 -- host/fio.sh@26 -- # waitforlisten 481826 00:22:02.429 00:06:31 -- common/autotest_common.sh@817 -- # '[' -z 481826 ']' 00:22:02.429 00:06:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.429 00:06:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:02.429 00:06:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.429 00:06:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:02.429 00:06:31 -- common/autotest_common.sh@10 -- # set +x 00:22:02.429 [2024-04-27 00:06:31.845641] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:22:02.429 [2024-04-27 00:06:31.845700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.429 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.429 [2024-04-27 00:06:31.914353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.429 [2024-04-27 00:06:31.981194] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.429 [2024-04-27 00:06:31.981231] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.429 [2024-04-27 00:06:31.981238] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.429 [2024-04-27 00:06:31.981245] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.429 [2024-04-27 00:06:31.981251] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.429 [2024-04-27 00:06:31.981359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.429 [2024-04-27 00:06:31.981477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.429 [2024-04-27 00:06:31.981633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.429 [2024-04-27 00:06:31.981633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.429 00:06:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:02.429 00:06:32 -- common/autotest_common.sh@850 -- # return 0 00:22:02.429 00:06:32 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.429 00:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.429 00:06:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.429 [2024-04-27 00:06:32.627341] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.429 00:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.429 00:06:32 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:02.429 00:06:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:02.429 00:06:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.694 00:06:32 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:02.694 00:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.694 00:06:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.694 Malloc1 00:22:02.694 00:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.694 00:06:32 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:02.694 00:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.694 00:06:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.694 00:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.694 00:06:32 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:02.694 00:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.694 00:06:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.694 00:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.694 00:06:32 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.694 00:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.694 00:06:32 -- common/autotest_common.sh@10 -- # set +x 
00:22:02.694 [2024-04-27 00:06:32.722889] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.694 00:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.694 00:06:32 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:02.694 00:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.694 00:06:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.694 00:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.694 00:06:32 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:02.694 00:06:32 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:02.694 00:06:32 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:02.694 00:06:32 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:02.694 00:06:32 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:02.694 00:06:32 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:02.694 00:06:32 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:02.694 00:06:32 -- common/autotest_common.sh@1327 -- # shift 00:22:02.694 00:06:32 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:02.694 00:06:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:02.694 00:06:32 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:02.694 00:06:32 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:02.694 00:06:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:02.694 00:06:32 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:02.694 00:06:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:02.694 00:06:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:02.694 00:06:32 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:02.694 00:06:32 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:02.694 00:06:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:02.694 00:06:32 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:02.694 00:06:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:02.694 00:06:32 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:02.694 00:06:32 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:03.026 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:03.026 fio-3.35 00:22:03.026 Starting 1 thread 00:22:03.026 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.608 00:22:05.608 test: (groupid=0, jobs=1): err= 0: pid=482235: Sat Apr 27 00:06:35 2024 00:22:05.608 read: IOPS=13.0k, 
BW=50.7MiB/s (53.2MB/s)(102MiB/2004msec) 00:22:05.608 slat (usec): min=2, max=271, avg= 2.16, stdev= 2.40 00:22:05.608 clat (usec): min=3653, max=8763, avg=5418.60, stdev=989.85 00:22:05.608 lat (usec): min=3655, max=8766, avg=5420.77, stdev=989.87 00:22:05.608 clat percentiles (usec): 00:22:05.608 | 1.00th=[ 4178], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 4686], 00:22:05.608 | 30.00th=[ 4817], 40.00th=[ 4948], 50.00th=[ 5014], 60.00th=[ 5145], 00:22:05.608 | 70.00th=[ 5342], 80.00th=[ 6587], 90.00th=[ 7177], 95.00th=[ 7504], 00:22:05.608 | 99.00th=[ 7898], 99.50th=[ 8029], 99.90th=[ 8291], 99.95th=[ 8356], 00:22:05.608 | 99.99th=[ 8717] 00:22:05.608 bw ( KiB/s): min=38491, max=57544, per=99.90%, avg=51882.75, stdev=9074.67, samples=4 00:22:05.608 iops : min= 9622, max=14386, avg=12970.50, stdev=2269.04, samples=4 00:22:05.608 write: IOPS=13.0k, BW=50.7MiB/s (53.1MB/s)(102MiB/2004msec); 0 zone resets 00:22:05.608 slat (usec): min=2, max=277, avg= 2.26, stdev= 1.88 00:22:05.608 clat (usec): min=2844, max=7374, avg=4370.68, stdev=788.03 00:22:05.608 lat (usec): min=2847, max=7376, avg=4372.94, stdev=788.08 00:22:05.608 clat percentiles (usec): 00:22:05.608 | 1.00th=[ 3359], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3818], 00:22:05.608 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4178], 00:22:05.608 | 70.00th=[ 4293], 80.00th=[ 5276], 90.00th=[ 5735], 95.00th=[ 5997], 00:22:05.608 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 6915], 00:22:05.608 | 99.99th=[ 7177] 00:22:05.608 bw ( KiB/s): min=39033, max=57784, per=99.93%, avg=51854.25, stdev=8769.19, samples=4 00:22:05.608 iops : min= 9758, max=14446, avg=12963.50, stdev=2192.42, samples=4 00:22:05.608 lat (msec) : 4=20.95%, 10=79.05% 00:22:05.608 cpu : usr=70.29%, sys=27.01%, ctx=20, majf=0, minf=5 00:22:05.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:05.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:05.608 issued rwts: total=26019,25996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:05.608 00:22:05.608 Run status group 0 (all jobs): 00:22:05.608 READ: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=102MiB (107MB), run=2004-2004msec 00:22:05.608 WRITE: bw=50.7MiB/s (53.1MB/s), 50.7MiB/s-50.7MiB/s (53.1MB/s-53.1MB/s), io=102MiB (106MB), run=2004-2004msec 00:22:05.608 00:06:35 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:05.608 00:06:35 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:05.608 00:06:35 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:05.608 00:06:35 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:05.608 00:06:35 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:05.608 00:06:35 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:05.608 00:06:35 -- common/autotest_common.sh@1327 -- # shift 00:22:05.608 00:06:35 -- 
common/autotest_common.sh@1329 -- # local asan_lib= 00:22:05.608 00:06:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.608 00:06:35 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:05.608 00:06:35 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:05.608 00:06:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:05.608 00:06:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:05.608 00:06:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:05.608 00:06:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.608 00:06:35 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:05.608 00:06:35 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:05.608 00:06:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:05.608 00:06:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:05.608 00:06:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:05.608 00:06:35 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:05.608 00:06:35 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:05.608 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:05.608 fio-3.35 00:22:05.608 Starting 1 thread 00:22:05.608 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.160 00:22:08.160 test: (groupid=0, jobs=1): err= 0: pid=483004: Sat Apr 27 00:06:38 2024 00:22:08.160 read: IOPS=9282, BW=145MiB/s (152MB/s)(291MiB/2003msec) 00:22:08.160 slat (usec): min=3, max=107, avg= 3.62, stdev= 1.62 00:22:08.160 clat (usec): min=2095, max=52975, avg=8613.87, stdev=3873.33 00:22:08.160 lat (usec): min=2099, max=52979, avg=8617.49, stdev=3873.42 00:22:08.160 clat percentiles (usec): 00:22:08.160 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6521], 00:22:08.160 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8225], 60.00th=[ 8848], 00:22:08.160 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[10945], 95.00th=[11600], 00:22:08.160 | 99.00th=[14091], 99.50th=[46924], 99.90th=[52167], 99.95th=[52691], 00:22:08.160 | 99.99th=[52691] 00:22:08.160 bw ( KiB/s): min=58208, max=92832, per=49.00%, avg=72776.00, stdev=15030.91, samples=4 00:22:08.160 iops : min= 3638, max= 5802, avg=4548.50, stdev=939.43, samples=4 00:22:08.160 write: IOPS=5765, BW=90.1MiB/s (94.5MB/s)(148MiB/1647msec); 0 zone resets 00:22:08.160 slat (usec): min=40, max=449, avg=41.27, stdev= 9.05 00:22:08.160 clat (usec): min=2066, max=17399, avg=9262.57, stdev=1589.45 00:22:08.160 lat (usec): min=2106, max=17531, avg=9303.83, stdev=1592.30 00:22:08.160 clat percentiles (usec): 00:22:08.160 | 1.00th=[ 6390], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 8029], 00:22:08.160 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:22:08.160 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11207], 95.00th=[12125], 00:22:08.160 | 99.00th=[13960], 99.50th=[14615], 99.90th=[16909], 99.95th=[17171], 00:22:08.160 | 99.99th=[17433] 00:22:08.160 bw ( KiB/s): min=60928, max=95680, per=82.35%, avg=75960.00, stdev=14870.89, samples=4 00:22:08.160 iops : min= 3808, max= 5980, avg=4747.50, 
stdev=929.43, samples=4 00:22:08.160 lat (msec) : 4=0.47%, 10=73.52%, 20=25.56%, 50=0.26%, 100=0.19% 00:22:08.160 cpu : usr=85.86%, sys=12.59%, ctx=16, majf=0, minf=20 00:22:08.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:08.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:08.160 issued rwts: total=18592,9495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:08.160 00:22:08.160 Run status group 0 (all jobs): 00:22:08.160 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=291MiB (305MB), run=2003-2003msec 00:22:08.160 WRITE: bw=90.1MiB/s (94.5MB/s), 90.1MiB/s-90.1MiB/s (94.5MB/s-94.5MB/s), io=148MiB (156MB), run=1647-1647msec 00:22:08.160 00:06:38 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.160 00:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.160 00:06:38 -- common/autotest_common.sh@10 -- # set +x 00:22:08.160 00:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.160 00:06:38 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:08.160 00:06:38 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:08.160 00:06:38 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:08.160 00:06:38 -- host/fio.sh@84 -- # nvmftestfini 00:22:08.160 00:06:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:08.160 00:06:38 -- nvmf/common.sh@117 -- # sync 00:22:08.160 00:06:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.160 00:06:38 -- nvmf/common.sh@120 -- # set +e 00:22:08.160 00:06:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.160 00:06:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.160 rmmod nvme_tcp 00:22:08.160 rmmod nvme_fabrics 00:22:08.160 rmmod nvme_keyring 00:22:08.160 00:06:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.160 00:06:38 -- nvmf/common.sh@124 -- # set -e 00:22:08.160 00:06:38 -- nvmf/common.sh@125 -- # return 0 00:22:08.160 00:06:38 -- nvmf/common.sh@478 -- # '[' -n 481826 ']' 00:22:08.160 00:06:38 -- nvmf/common.sh@479 -- # killprocess 481826 00:22:08.160 00:06:38 -- common/autotest_common.sh@936 -- # '[' -z 481826 ']' 00:22:08.160 00:06:38 -- common/autotest_common.sh@940 -- # kill -0 481826 00:22:08.160 00:06:38 -- common/autotest_common.sh@941 -- # uname 00:22:08.160 00:06:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:08.160 00:06:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 481826 00:22:08.160 00:06:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:08.160 00:06:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:08.160 00:06:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 481826' 00:22:08.160 killing process with pid 481826 00:22:08.160 00:06:38 -- common/autotest_common.sh@955 -- # kill 481826 00:22:08.160 00:06:38 -- common/autotest_common.sh@960 -- # wait 481826 00:22:08.160 00:06:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:08.160 00:06:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:08.160 00:06:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:08.160 00:06:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.160 00:06:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:08.160 00:06:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
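Both fio jobs above run against the SPDK userspace NVMe initiator rather than the kernel driver: host/fio.sh LD_PRELOADs the SPDK fio plugin (build/fio/spdk_nvme) into /usr/src/fio/fio and passes the NVMe/TCP connection parameters through --filename. A sketch of the first invocation, taken from this log:
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
The second job is the same invocation with mock_sgl_config.fio and no --bs override.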
00:22:08.160 00:06:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.160 00:06:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.707 00:06:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:10.707 00:22:10.707 real 0m16.210s 00:22:10.707 user 0m59.939s 00:22:10.707 sys 0m7.142s 00:22:10.707 00:06:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:10.707 00:06:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.707 ************************************ 00:22:10.707 END TEST nvmf_fio_host 00:22:10.707 ************************************ 00:22:10.707 00:06:40 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:10.707 00:06:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:10.707 00:06:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:10.707 00:06:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.707 ************************************ 00:22:10.707 START TEST nvmf_failover 00:22:10.707 ************************************ 00:22:10.707 00:06:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:10.707 * Looking for test storage... 00:22:10.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:10.707 00:06:40 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.707 00:06:40 -- nvmf/common.sh@7 -- # uname -s 00:22:10.707 00:06:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.707 00:06:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.707 00:06:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.707 00:06:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.707 00:06:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.707 00:06:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.707 00:06:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.707 00:06:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.707 00:06:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.707 00:06:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.707 00:06:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.707 00:06:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.707 00:06:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.707 00:06:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.707 00:06:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.707 00:06:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.707 00:06:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.707 00:06:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.707 00:06:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.707 00:06:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.707 00:06:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.707 00:06:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.707 00:06:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.707 00:06:40 -- paths/export.sh@5 -- # export PATH 00:22:10.708 00:06:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.708 00:06:40 -- nvmf/common.sh@47 -- # : 0 00:22:10.708 00:06:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:10.708 00:06:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:10.708 00:06:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.708 00:06:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.708 00:06:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.708 00:06:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:10.708 00:06:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:10.708 00:06:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:10.708 00:06:40 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:10.708 00:06:40 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:10.708 00:06:40 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:10.708 00:06:40 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:10.708 00:06:40 -- host/failover.sh@18 -- # nvmftestinit 00:22:10.708 00:06:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:10.708 00:06:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.708 00:06:40 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:22:10.708 00:06:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:10.708 00:06:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:10.708 00:06:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.708 00:06:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.708 00:06:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.708 00:06:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:10.708 00:06:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:10.708 00:06:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:10.708 00:06:40 -- common/autotest_common.sh@10 -- # set +x 00:22:17.294 00:06:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:17.294 00:06:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.294 00:06:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.294 00:06:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.294 00:06:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.294 00:06:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.294 00:06:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.294 00:06:47 -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.294 00:06:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.294 00:06:47 -- nvmf/common.sh@296 -- # e810=() 00:22:17.294 00:06:47 -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.294 00:06:47 -- nvmf/common.sh@297 -- # x722=() 00:22:17.294 00:06:47 -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.294 00:06:47 -- nvmf/common.sh@298 -- # mlx=() 00:22:17.294 00:06:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.294 00:06:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.294 00:06:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.294 00:06:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.294 00:06:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.295 00:06:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.295 00:06:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.295 00:06:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.295 00:06:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.295 00:06:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.295 00:06:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.295 00:06:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.295 00:06:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.295 00:06:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.295 00:06:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.295 00:06:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.295 00:06:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:17.295 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:17.295 00:06:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.295 00:06:47 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.295 00:06:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:17.295 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:17.295 00:06:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.295 00:06:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.295 00:06:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.295 00:06:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:17.295 00:06:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.295 00:06:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:17.295 Found net devices under 0000:31:00.0: cvl_0_0 00:22:17.295 00:06:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.295 00:06:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.295 00:06:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.295 00:06:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:17.295 00:06:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.295 00:06:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:17.295 Found net devices under 0000:31:00.1: cvl_0_1 00:22:17.295 00:06:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.295 00:06:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:17.295 00:06:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:17.295 00:06:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:17.295 00:06:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.295 00:06:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.295 00:06:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.295 00:06:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.295 00:06:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.295 00:06:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.295 00:06:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.295 00:06:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.295 00:06:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.295 00:06:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.295 00:06:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.295 00:06:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.295 00:06:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.295 00:06:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.295 00:06:47 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.295 00:06:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.295 00:06:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.295 00:06:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.295 00:06:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.295 00:06:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:22:17.295 00:22:17.295 --- 10.0.0.2 ping statistics --- 00:22:17.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.295 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:22:17.295 00:06:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:22:17.295 00:22:17.295 --- 10.0.0.1 ping statistics --- 00:22:17.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.295 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:22:17.295 00:06:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.295 00:06:47 -- nvmf/common.sh@411 -- # return 0 00:22:17.295 00:06:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:17.295 00:06:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.295 00:06:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:17.295 00:06:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.295 00:06:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:17.295 00:06:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:17.295 00:06:47 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:17.295 00:06:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:17.295 00:06:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:17.295 00:06:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.295 00:06:47 -- nvmf/common.sh@470 -- # nvmfpid=487480 00:22:17.295 00:06:47 -- nvmf/common.sh@471 -- # waitforlisten 487480 00:22:17.295 00:06:47 -- common/autotest_common.sh@817 -- # '[' -z 487480 ']' 00:22:17.295 00:06:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.295 00:06:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:17.295 00:06:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.295 00:06:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:17.295 00:06:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.295 00:06:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:17.295 [2024-04-27 00:06:47.445679] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
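Before the failover test starts, nvmf/common.sh builds the TCP test topology seen above: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), so the two ends of the TCP connection live on separate interfaces. A condensed sketch of that plumbing, using only commands visible in the log:
ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check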
00:22:17.295 [2024-04-27 00:06:47.445762] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.295 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.295 [2024-04-27 00:06:47.511929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.556 [2024-04-27 00:06:47.577007] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.556 [2024-04-27 00:06:47.577045] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.556 [2024-04-27 00:06:47.577053] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.556 [2024-04-27 00:06:47.577059] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.556 [2024-04-27 00:06:47.577065] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.556 [2024-04-27 00:06:47.577172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.556 [2024-04-27 00:06:47.577328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.556 [2024-04-27 00:06:47.577328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.128 00:06:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:18.128 00:06:48 -- common/autotest_common.sh@850 -- # return 0 00:22:18.128 00:06:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:18.128 00:06:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:18.128 00:06:48 -- common/autotest_common.sh@10 -- # set +x 00:22:18.128 00:06:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.128 00:06:48 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:18.389 [2024-04-27 00:06:48.368894] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.389 00:06:48 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:18.389 Malloc0 00:22:18.389 00:06:48 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.651 00:06:48 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.912 00:06:48 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.912 [2024-04-27 00:06:49.049264] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.912 00:06:49 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:19.173 [2024-04-27 00:06:49.217709] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:19.174 00:06:49 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4422 00:22:19.174 [2024-04-27 00:06:49.386243] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:19.435 00:06:49 -- host/failover.sh@31 -- # bdevperf_pid=487991 00:22:19.435 00:06:49 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:19.435 00:06:49 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.435 00:06:49 -- host/failover.sh@34 -- # waitforlisten 487991 /var/tmp/bdevperf.sock 00:22:19.435 00:06:49 -- common/autotest_common.sh@817 -- # '[' -z 487991 ']' 00:22:19.435 00:06:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.435 00:06:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:19.435 00:06:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.435 00:06:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:19.435 00:06:49 -- common/autotest_common.sh@10 -- # set +x 00:22:20.379 00:06:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:20.379 00:06:50 -- common/autotest_common.sh@850 -- # return 0 00:22:20.379 00:06:50 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.640 NVMe0n1 00:22:20.640 00:06:50 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.902 00:22:20.902 00:06:51 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:20.902 00:06:51 -- host/failover.sh@39 -- # run_test_pid=488274 00:22:20.902 00:06:51 -- host/failover.sh@41 -- # sleep 1 00:22:21.845 00:06:52 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.107 [2024-04-27 00:06:52.181901] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88cb0 is same with the state(5) to be set 00:22:22.107 [2024-04-27 00:06:52.181940] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88cb0 is same with the state(5) to be set 00:22:22.107 [2024-04-27 00:06:52.181946] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88cb0 is same with the state(5) to be set 00:22:22.107 [2024-04-27 00:06:52.181951] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88cb0 is same with the state(5) to be set 00:22:22.107 [2024-04-27 00:06:52.181955] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88cb0 is same with the state(5) to be set 00:22:22.107 [2024-04-27 00:06:52.181960] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88cb0 is same with the state(5) to be set 00:22:22.107 [2024-04-27 00:06:52.181964] 
tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88cb0 is same with the state(5) to be set
[ tcp.c:1594:nvmf_tcp_qpair_set_recv_state *ERROR* line repeated many more times for tqpair=0xb88cb0 between 00:22:22.107 and 00:22:22.109; identical lines condensed ]
00:22:22.109 00:06:52 -- host/failover.sh@45 -- # sleep 3 00:06:55 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.413 00:22:25.413 00:06:55 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:25.675 [2024-04-27 00:06:55.732355] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732398] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732406] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732413] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732419] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732426] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732432] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732439] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732445] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732451] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732464] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732470] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732477] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732483] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732489] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732496] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732502] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732509] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732516] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732522] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb89b60 is same with the state(5) to be set
[ tcp.c:1594:nvmf_tcp_qpair_set_recv_state *ERROR* line repeated many more times for tqpair=0xb89b60 at 00:22:25.675; identical lines condensed ]
00:22:25.675 [2024-04-27 00:06:55.732671]
tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732677] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732683] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732689] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732696] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732702] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.675 [2024-04-27 00:06:55.732709] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb89b60 is same with the state(5) to be set 00:22:25.676 00:06:55 -- host/failover.sh@50 -- # sleep 3 00:22:28.978 00:06:58 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.978 [2024-04-27 00:06:58.905925] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.978 00:06:58 -- host/failover.sh@55 -- # sleep 1 00:22:29.922 00:06:59 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:29.922 [2024-04-27 00:07:00.086571] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086612] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086619] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086627] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086633] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086640] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086647] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086653] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086659] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086666] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 [2024-04-27 00:07:00.086672] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set 00:22:29.922 
00:22:29.922 [2024-04-27 00:07:00.086678] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set
00:22:29.922 [2024-04-27 00:07:00.086685] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set
00:22:29.922 [2024-04-27 00:07:00.086697] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set
00:22:29.922 [2024-04-27 00:07:00.086704] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set
00:22:29.922 [2024-04-27 00:07:00.086710] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8a6a0 is same with the state(5) to be set
00:22:29.922 00:07:00 -- host/failover.sh@59 -- # wait 488274
00:22:36.519 0
00:22:36.519 00:07:06 -- host/failover.sh@61 -- # killprocess 487991
00:22:36.519 00:07:06 -- common/autotest_common.sh@936 -- # '[' -z 487991 ']'
00:22:36.519 00:07:06 -- common/autotest_common.sh@940 -- # kill -0 487991
00:22:36.519 00:07:06 -- common/autotest_common.sh@941 -- # uname
00:22:36.519 00:07:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:36.519 00:07:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 487991
00:22:36.519 00:07:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:36.519 00:07:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:36.519 00:07:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 487991'
00:22:36.519 killing process with pid 487991
00:22:36.519 00:07:06 -- common/autotest_common.sh@955 -- # kill 487991
00:22:36.519 00:07:06 -- common/autotest_common.sh@960 -- # wait 487991
00:22:36.519 00:07:06 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:36.519 [2024-04-27 00:06:49.462363] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization...
00:22:36.519 [2024-04-27 00:06:49.462421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487991 ]
00:22:36.519 EAL: No free 2048 kB hugepages reported on node 1
00:22:36.519 [2024-04-27 00:06:49.522316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:36.519 [2024-04-27 00:06:49.586500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:36.519 Running I/O for 15 seconds...
00:22:36.519 [2024-04-27 00:06:52.182718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182940] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.182988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.182997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.183005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.183014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.519 [2024-04-27 00:06:52.183021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.519 [2024-04-27 00:06:52.183030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183274] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.520 [2024-04-27 00:06:52.183436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.520 [2024-04-27 00:06:52.183443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 
00:06:52.183764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.521 [2024-04-27 00:06:52.183866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.521 [2024-04-27 00:06:52.183875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.183886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.183895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.183902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.183911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.183918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.183927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.183934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.183943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.183950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.183959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.183966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.183975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.183982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.183992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.183999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.522 [2024-04-27 00:06:52.184241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.522 [2024-04-27 00:06:52.184249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 
[2024-04-27 00:06:52.184433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.523 [2024-04-27 00:06:52.184617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.523 [2024-04-27 00:06:52.184626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:52.184848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x929580 is same with the state(5) to be set 00:22:36.524 [2024-04-27 00:06:52.184865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:36.524 [2024-04-27 00:06:52.184870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:36.524 [2024-04-27 00:06:52.184877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98344 len:8 PRP1 0x0 PRP2 0x0 00:22:36.524 [2024-04-27 00:06:52.184885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184922] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x929580 was disconnected and freed. reset controller. 
00:22:36.524 [2024-04-27 00:06:52.184931] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:36.524 [2024-04-27 00:06:52.184950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.524 [2024-04-27 00:06:52.184958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.524 [2024-04-27 00:06:52.184974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.524 [2024-04-27 00:06:52.184990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.184997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.524 [2024-04-27 00:06:52.185004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:52.185012] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:36.524 [2024-04-27 00:06:52.185050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x918e50 (9): Bad file descriptor 00:22:36.524 [2024-04-27 00:06:52.188559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:36.524 [2024-04-27 00:06:52.231159] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:36.524 [2024-04-27 00:06:55.733527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.524 [2024-04-27 00:06:55.733564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:55.733574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.524 [2024-04-27 00:06:55.733587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:55.733595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.524 [2024-04-27 00:06:55.733602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:55.733610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.524 [2024-04-27 00:06:55.733617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:55.733625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x918e50 is same with the state(5) to be set 00:22:36.524 [2024-04-27 00:06:55.733672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:55.733682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.524 [2024-04-27 00:06:55.733695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.524 [2024-04-27 00:06:55.733703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733938] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.733987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.733996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.734003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.734013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.734020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.734029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.734036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.734046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.734054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.734063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.734070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.734079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.734086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.734095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.734103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.734112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.525 [2024-04-27 00:06:55.734119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.525 [2024-04-27 00:06:55.734128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.526 [2024-04-27 00:06:55.734331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.526 [2024-04-27 00:06:55.734348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.526 [2024-04-27 00:06:55.734365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.526 [2024-04-27 00:06:55.734381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.526 [2024-04-27 00:06:55.734389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.526 [2024-04-27 00:06:55.734396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 
[2024-04-27 00:06:55.734437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.527 [2024-04-27 00:06:55.734588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734597] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.527 [2024-04-27 00:06:55.734810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.527 [2024-04-27 00:06:55.734817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:62 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.734990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.734997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107832 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:36.528 [2024-04-27 00:06:55.735253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.528 [2024-04-27 00:06:55.735262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.528 [2024-04-27 00:06:55.735270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735592] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.529 [2024-04-27 00:06:55.735700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.529 [2024-04-27 00:06:55.735707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:06:55.735716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:06:55.735722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:06:55.735732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:06:55.735739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:06:55.735748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:06:55.735755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:06:55.735773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:36.530 [2024-04-27 00:06:55.735779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:36.530 [2024-04-27 00:06:55.735787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108160 len:8 PRP1 0x0 PRP2 0x0 00:22:36.530 [2024-04-27 00:06:55.735794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:06:55.735829] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9252f0 was disconnected and freed. reset controller. 00:22:36.530 [2024-04-27 00:06:55.735843] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:36.530 [2024-04-27 00:06:55.735851] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:36.530 [2024-04-27 00:06:55.739310] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:36.530 [2024-04-27 00:06:55.739335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x918e50 (9): Bad file descriptor 00:22:36.530 [2024-04-27 00:06:55.901544] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:36.530 [2024-04-27 00:07:00.089232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.530 [2024-04-27 00:07:00.089416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.530 [2024-04-27 00:07:00.089434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.530 [2024-04-27 00:07:00.089451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.530 [2024-04-27 00:07:00.089469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.530 [2024-04-27 00:07:00.089487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.530 [2024-04-27 00:07:00.089503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:36.530 [2024-04-27 00:07:00.089546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.530 [2024-04-27 00:07:00.089708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.530 [2024-04-27 00:07:00.089716] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.531 [2024-04-27 00:07:00.089880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.531 [2024-04-27 00:07:00.089889] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:36.531 [2024-04-27 00:07:00.089897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.531 - 00:22:36.535 [2024-04-27 00:07:00.089906 - 00:07:00.091195] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the same command/completion pair repeats for every other outstanding command on sqid:1 (WRITE lba 69728-70256, READ lba 69296-69384); each is completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.534 - 00:22:36.535 [2024-04-27 00:07:00.091216 - 00:07:00.091528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: queued READ lba 69392-69472 and WRITE lba 70264, each completed as ABORTED - SQ DELETION (00/08)
00:22:36.535 [2024-04-27 00:07:00.091564] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xae2d20 was disconnected and freed. reset controller. 
00:22:36.535 [2024-04-27 00:07:00.091574] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:36.535 [2024-04-27 00:07:00.091593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.535 [2024-04-27 00:07:00.091601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.535 [2024-04-27 00:07:00.091609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.535 [2024-04-27 00:07:00.091617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.535 [2024-04-27 00:07:00.091625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.535 [2024-04-27 00:07:00.091632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.535 [2024-04-27 00:07:00.091640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.535 [2024-04-27 00:07:00.091647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.535 [2024-04-27 00:07:00.091655] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:36.535 [2024-04-27 00:07:00.095121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:36.535 [2024-04-27 00:07:00.095149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x918e50 (9): Bad file descriptor 00:22:36.535 [2024-04-27 00:07:00.309280] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
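The abort storm and reset above are the intended effect of the failover test: bdev_nvme holds several TCP paths to the same subsystem under one bdev name, and when the path in use is torn down it aborts in-flight I/O, fails over to the next registered trid, and resets the controller. A minimal sketch of how the test drives this through rpc.py, condensed from the calls traced further down in this log (ports, bdev name, and NQN are the ones used here; $SPDK is my shorthand for the long workspace path, not part of the test):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand for the workspace tree
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Target side: expose two additional portals for the same subsystem.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

  # Initiator side (bdevperf): register all three paths under one bdev name.
  for port in 4420 4421 4422; do
      $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
  done

  # Drop the active path; bdev_nvme aborts outstanding I/O and fails over, as traced above.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN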
00:22:36.535 00:22:36.535 Latency(us) 00:22:36.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.536 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:36.536 Verification LBA range: start 0x0 length 0x4000 00:22:36.536 NVMe0n1 : 15.01 9973.34 38.96 1021.99 0.00 11614.22 774.83 15837.87 00:22:36.536 =================================================================================================================== 00:22:36.536 Total : 9973.34 38.96 1021.99 0.00 11614.22 774.83 15837.87 00:22:36.536 Received shutdown signal, test time was about 15.000000 seconds 00:22:36.536 00:22:36.536 Latency(us) 00:22:36.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.536 =================================================================================================================== 00:22:36.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.536 00:07:06 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:36.536 00:07:06 -- host/failover.sh@65 -- # count=3 00:22:36.536 00:07:06 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:36.536 00:07:06 -- host/failover.sh@73 -- # bdevperf_pid=491185 00:22:36.536 00:07:06 -- host/failover.sh@75 -- # waitforlisten 491185 /var/tmp/bdevperf.sock 00:22:36.536 00:07:06 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:36.536 00:07:06 -- common/autotest_common.sh@817 -- # '[' -z 491185 ']' 00:22:36.536 00:07:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.536 00:07:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:36.536 00:07:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
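Above, the harness re-launches bdevperf with -z, which brings the application up but defers I/O until it is driven over its RPC socket; the verify run is then triggered with the bundled bdevperf.py helper once the NVMe paths are attached. A rough sketch of that pattern, assuming only the binaries, socket path, and options shown in this log:

  # Start bdevperf in RPC-wait mode: -z makes it initialize and block on the RPC socket
  # instead of running the workload immediately.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # (attach the NVMe-oF paths over /var/tmp/bdevperf.sock here, as the test does below)

  # Kick off the configured verify job and wait for its result.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests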
00:22:36.536 00:07:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:36.536 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:22:37.109 00:07:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:37.109 00:07:07 -- common/autotest_common.sh@850 -- # return 0 00:22:37.109 00:07:07 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:37.371 [2024-04-27 00:07:07.339295] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:37.371 00:07:07 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:37.371 [2024-04-27 00:07:07.499675] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:37.371 00:07:07 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:37.943 NVMe0n1 00:22:37.943 00:07:07 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.203 00:22:38.203 00:07:08 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.464 00:22:38.464 00:07:08 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.464 00:07:08 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:38.725 00:07:08 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.985 00:07:08 -- host/failover.sh@87 -- # sleep 3 00:22:42.289 00:07:11 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:42.289 00:07:11 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:42.289 00:07:12 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.289 00:07:12 -- host/failover.sh@90 -- # run_test_pid=492453 00:22:42.289 00:07:12 -- host/failover.sh@92 -- # wait 492453 00:22:43.232 0 00:22:43.232 00:07:13 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:43.232 [2024-04-27 00:07:06.438997] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:22:43.232 [2024-04-27 00:07:06.439057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491185 ] 00:22:43.232 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.232 [2024-04-27 00:07:06.499982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.232 [2024-04-27 00:07:06.561981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.232 [2024-04-27 00:07:08.940431] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:43.232 [2024-04-27 00:07:08.940477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.232 [2024-04-27 00:07:08.940488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.232 [2024-04-27 00:07:08.940497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.232 [2024-04-27 00:07:08.940505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.232 [2024-04-27 00:07:08.940513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.232 [2024-04-27 00:07:08.940520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.232 [2024-04-27 00:07:08.940528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.232 [2024-04-27 00:07:08.940535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.232 [2024-04-27 00:07:08.940542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:43.232 [2024-04-27 00:07:08.940572] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:43.232 [2024-04-27 00:07:08.940587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6e50 (9): Bad file descriptor 00:22:43.232 [2024-04-27 00:07:08.953813] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:43.232 Running I/O for 1 seconds... 
00:22:43.232 00:22:43.232 Latency(us) 00:22:43.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.232 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:43.232 Verification LBA range: start 0x0 length 0x4000 00:22:43.232 NVMe0n1 : 1.01 12059.34 47.11 0.00 0.00 10557.76 1590.61 10158.08 00:22:43.232 =================================================================================================================== 00:22:43.232 Total : 12059.34 47.11 0.00 0.00 10557.76 1590.61 10158.08 00:22:43.232 00:07:13 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:43.232 00:07:13 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:43.232 00:07:13 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:43.494 00:07:13 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:43.494 00:07:13 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:43.754 00:07:13 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:43.754 00:07:13 -- host/failover.sh@101 -- # sleep 3 00:22:47.191 00:07:16 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:47.191 00:07:16 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:47.191 00:07:17 -- host/failover.sh@108 -- # killprocess 491185 00:22:47.191 00:07:17 -- common/autotest_common.sh@936 -- # '[' -z 491185 ']' 00:22:47.191 00:07:17 -- common/autotest_common.sh@940 -- # kill -0 491185 00:22:47.191 00:07:17 -- common/autotest_common.sh@941 -- # uname 00:22:47.191 00:07:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.191 00:07:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 491185 00:22:47.191 00:07:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:47.192 00:07:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:47.192 00:07:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 491185' 00:22:47.192 killing process with pid 491185 00:22:47.192 00:07:17 -- common/autotest_common.sh@955 -- # kill 491185 00:22:47.192 00:07:17 -- common/autotest_common.sh@960 -- # wait 491185 00:22:47.192 00:07:17 -- host/failover.sh@110 -- # sync 00:22:47.192 00:07:17 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.453 00:07:17 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:47.453 00:07:17 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:47.453 00:07:17 -- host/failover.sh@116 -- # nvmftestfini 00:22:47.453 00:07:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:47.453 00:07:17 -- nvmf/common.sh@117 -- # sync 00:22:47.453 00:07:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:47.453 00:07:17 -- nvmf/common.sh@120 -- # set +e 00:22:47.453 00:07:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.453 00:07:17 -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:22:47.453 rmmod nvme_tcp 00:22:47.453 rmmod nvme_fabrics 00:22:47.453 rmmod nvme_keyring 00:22:47.453 00:07:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.453 00:07:17 -- nvmf/common.sh@124 -- # set -e 00:22:47.453 00:07:17 -- nvmf/common.sh@125 -- # return 0 00:22:47.453 00:07:17 -- nvmf/common.sh@478 -- # '[' -n 487480 ']' 00:22:47.453 00:07:17 -- nvmf/common.sh@479 -- # killprocess 487480 00:22:47.453 00:07:17 -- common/autotest_common.sh@936 -- # '[' -z 487480 ']' 00:22:47.453 00:07:17 -- common/autotest_common.sh@940 -- # kill -0 487480 00:22:47.453 00:07:17 -- common/autotest_common.sh@941 -- # uname 00:22:47.453 00:07:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.453 00:07:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 487480 00:22:47.453 00:07:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:47.453 00:07:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:47.453 00:07:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 487480' 00:22:47.453 killing process with pid 487480 00:22:47.453 00:07:17 -- common/autotest_common.sh@955 -- # kill 487480 00:22:47.453 00:07:17 -- common/autotest_common.sh@960 -- # wait 487480 00:22:47.713 00:07:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:47.713 00:07:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:47.713 00:07:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:47.713 00:07:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.713 00:07:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.713 00:07:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.713 00:07:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.713 00:07:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.657 00:07:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.657 00:22:49.657 real 0m39.183s 00:22:49.657 user 2m3.037s 00:22:49.657 sys 0m7.622s 00:22:49.657 00:07:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:49.657 00:07:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.657 ************************************ 00:22:49.657 END TEST nvmf_failover 00:22:49.657 ************************************ 00:22:49.657 00:07:19 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:49.657 00:07:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:49.657 00:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:49.658 00:07:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.918 ************************************ 00:22:49.918 START TEST nvmf_discovery 00:22:49.918 ************************************ 00:22:49.918 00:07:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:49.918 * Looking for test storage... 
00:22:49.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.918 00:07:20 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.918 00:07:20 -- nvmf/common.sh@7 -- # uname -s 00:22:49.918 00:07:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.918 00:07:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.918 00:07:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.918 00:07:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.918 00:07:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.918 00:07:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.918 00:07:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.918 00:07:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.918 00:07:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.918 00:07:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.918 00:07:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.918 00:07:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.918 00:07:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.918 00:07:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.918 00:07:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.918 00:07:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.918 00:07:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.918 00:07:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.918 00:07:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.918 00:07:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.919 00:07:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.919 00:07:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.919 00:07:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.919 00:07:20 -- paths/export.sh@5 -- # export PATH 00:22:49.919 00:07:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.919 00:07:20 -- nvmf/common.sh@47 -- # : 0 00:22:49.919 00:07:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.919 00:07:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.919 00:07:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.919 00:07:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.919 00:07:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.919 00:07:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.919 00:07:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.919 00:07:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.919 00:07:20 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:49.919 00:07:20 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:49.919 00:07:20 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:49.919 00:07:20 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:49.919 00:07:20 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:49.919 00:07:20 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:49.919 00:07:20 -- host/discovery.sh@25 -- # nvmftestinit 00:22:49.919 00:07:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:49.919 00:07:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.919 00:07:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:49.919 00:07:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:49.919 00:07:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:49.919 00:07:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.919 00:07:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.919 00:07:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.919 00:07:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:49.919 00:07:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:49.919 00:07:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.919 00:07:20 -- common/autotest_common.sh@10 -- # set +x 00:22:58.066 00:07:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:58.066 00:07:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.066 00:07:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.066 00:07:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.066 00:07:26 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.066 00:07:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.066 00:07:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.066 00:07:26 -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.066 00:07:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.066 00:07:26 -- nvmf/common.sh@296 -- # e810=() 00:22:58.066 00:07:26 -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.066 00:07:26 -- nvmf/common.sh@297 -- # x722=() 00:22:58.066 00:07:26 -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.066 00:07:26 -- nvmf/common.sh@298 -- # mlx=() 00:22:58.066 00:07:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.066 00:07:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.066 00:07:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.066 00:07:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.066 00:07:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.066 00:07:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.066 00:07:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:58.066 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:58.066 00:07:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.066 00:07:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:58.066 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:58.066 00:07:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.066 00:07:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.066 
00:07:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.066 00:07:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:58.066 00:07:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.066 00:07:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:58.066 Found net devices under 0000:31:00.0: cvl_0_0 00:22:58.066 00:07:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.066 00:07:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.066 00:07:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.066 00:07:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:58.066 00:07:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.066 00:07:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:58.066 Found net devices under 0000:31:00.1: cvl_0_1 00:22:58.066 00:07:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.066 00:07:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:58.066 00:07:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:58.066 00:07:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:58.066 00:07:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:58.066 00:07:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.066 00:07:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.066 00:07:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.066 00:07:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.066 00:07:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.066 00:07:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.066 00:07:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.066 00:07:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.066 00:07:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.066 00:07:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.066 00:07:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.066 00:07:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.066 00:07:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.066 00:07:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.066 00:07:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.066 00:07:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:58.066 00:07:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.066 00:07:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.066 00:07:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.066 00:07:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:58.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:22:58.066 00:22:58.066 --- 10.0.0.2 ping statistics --- 00:22:58.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.066 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:22:58.066 00:07:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:22:58.066 00:22:58.066 --- 10.0.0.1 ping statistics --- 00:22:58.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.066 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:22:58.066 00:07:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.066 00:07:27 -- nvmf/common.sh@411 -- # return 0 00:22:58.066 00:07:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:58.066 00:07:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.066 00:07:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:58.066 00:07:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:58.066 00:07:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.066 00:07:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:58.066 00:07:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:58.066 00:07:27 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:58.066 00:07:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:58.067 00:07:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:58.067 00:07:27 -- common/autotest_common.sh@10 -- # set +x 00:22:58.067 00:07:27 -- nvmf/common.sh@470 -- # nvmfpid=497593 00:22:58.067 00:07:27 -- nvmf/common.sh@471 -- # waitforlisten 497593 00:22:58.067 00:07:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:58.067 00:07:27 -- common/autotest_common.sh@817 -- # '[' -z 497593 ']' 00:22:58.067 00:07:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.067 00:07:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:58.067 00:07:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.067 00:07:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:58.067 00:07:27 -- common/autotest_common.sh@10 -- # set +x 00:22:58.067 [2024-04-27 00:07:27.297098] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:22:58.067 [2024-04-27 00:07:27.297162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.067 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.067 [2024-04-27 00:07:27.367772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.067 [2024-04-27 00:07:27.440859] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.067 [2024-04-27 00:07:27.440901] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.067 [2024-04-27 00:07:27.440913] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.067 [2024-04-27 00:07:27.440919] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.067 [2024-04-27 00:07:27.440925] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
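For readability, the namespace plumbing traced above boils down to the following sketch (a simplified reconstruction, not the verbatim nvmf_tcp_init; the cvl_0_0/cvl_0_1 interface names and the nvmf_tgt path are specific to this machine and workspace):
# target-side E810 port moves into a private namespace, initiator port stays in the default one
TGT_NS=cvl_0_0_ns_spdk
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # allow NVMe/TCP I/O connections in
ping -c 1 10.0.0.2                                           # initiator -> target reachability
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1                   # target -> initiator reachability
modprobe nvme-tcp
ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &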
00:22:58.067 [2024-04-27 00:07:27.440948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.067 00:07:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:58.067 00:07:28 -- common/autotest_common.sh@850 -- # return 0 00:22:58.067 00:07:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:58.067 00:07:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:58.067 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.067 00:07:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.067 00:07:28 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.067 00:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.067 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.067 [2024-04-27 00:07:28.107521] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.067 00:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.067 00:07:28 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:58.067 00:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.067 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.067 [2024-04-27 00:07:28.119668] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:58.067 00:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.067 00:07:28 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:58.067 00:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.067 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.067 null0 00:22:58.067 00:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.067 00:07:28 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:58.067 00:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.067 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.067 null1 00:22:58.067 00:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.067 00:07:28 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:58.067 00:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.067 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.067 00:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.067 00:07:28 -- host/discovery.sh@45 -- # hostpid=497881 00:22:58.067 00:07:28 -- host/discovery.sh@46 -- # waitforlisten 497881 /tmp/host.sock 00:22:58.067 00:07:28 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:58.067 00:07:28 -- common/autotest_common.sh@817 -- # '[' -z 497881 ']' 00:22:58.067 00:07:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:22:58.067 00:07:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:58.067 00:07:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:58.067 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:58.067 00:07:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:58.067 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.067 [2024-04-27 00:07:28.206346] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
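The rpc_cmd calls in this stretch are the target-side bring-up; expressed directly with SPDK's scripts/rpc.py (a sketch only; rpc_cmd is the test suite's wrapper and selects the RPC socket for you, and the transport flags are copied verbatim from the trace):
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009                               # discovery service on port 8009
./scripts/rpc.py bdev_null_create null0 1000 512             # two null bdevs to export later
./scripts/rpc.py bdev_null_create null1 1000 512
./scripts/rpc.py bdev_wait_for_examine
# a second nvmf_tgt instance plays the host/initiator role on its own RPC socket
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &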
00:22:58.067 [2024-04-27 00:07:28.206391] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497881 ] 00:22:58.067 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.067 [2024-04-27 00:07:28.266372] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.329 [2024-04-27 00:07:28.330476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.901 00:07:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:58.901 00:07:28 -- common/autotest_common.sh@850 -- # return 0 00:22:58.901 00:07:28 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.901 00:07:28 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:58.901 00:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.901 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.901 00:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.901 00:07:28 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:58.901 00:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.901 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.901 00:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.901 00:07:28 -- host/discovery.sh@72 -- # notify_id=0 00:22:58.901 00:07:28 -- host/discovery.sh@83 -- # get_subsystem_names 00:22:58.901 00:07:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.901 00:07:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:58.901 00:07:28 -- host/discovery.sh@59 -- # sort 00:22:58.901 00:07:28 -- host/discovery.sh@59 -- # xargs 00:22:58.901 00:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.901 00:07:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.901 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.901 00:07:29 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:58.901 00:07:29 -- host/discovery.sh@84 -- # get_bdev_list 00:22:58.901 00:07:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.901 00:07:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:58.901 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.901 00:07:29 -- host/discovery.sh@55 -- # xargs 00:22:58.901 00:07:29 -- host/discovery.sh@55 -- # sort 00:22:58.901 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:58.901 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.902 00:07:29 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:58.902 00:07:29 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:58.902 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.902 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:58.902 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.902 00:07:29 -- host/discovery.sh@87 -- # get_subsystem_names 00:22:58.902 00:07:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.902 00:07:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:58.902 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.902 00:07:29 -- host/discovery.sh@59 -- # sort 
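On the host side the script enables bdev_nvme logging, starts the discovery service against the target's discovery endpoint, and then repeatedly inspects controllers and bdevs through two small jq helpers. A sketch of the same calls (socket path, NQNs and jq filters as in the trace):
HOST_SOCK=/tmp/host.sock
./scripts/rpc.py -s "$HOST_SOCK" log_set_flag bdev_nvme
./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# get_subsystem_names / get_bdev_list in the script amount to:
./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
./scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs             | jq -r '.[].name' | sort | xargs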
00:22:58.902 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:58.902 00:07:29 -- host/discovery.sh@59 -- # xargs 00:22:58.902 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.163 00:07:29 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:59.163 00:07:29 -- host/discovery.sh@88 -- # get_bdev_list 00:22:59.163 00:07:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.163 00:07:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.163 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.163 00:07:29 -- host/discovery.sh@55 -- # sort 00:22:59.163 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.163 00:07:29 -- host/discovery.sh@55 -- # xargs 00:22:59.163 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.163 00:07:29 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:59.163 00:07:29 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:59.163 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.163 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.163 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.163 00:07:29 -- host/discovery.sh@91 -- # get_subsystem_names 00:22:59.163 00:07:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.163 00:07:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.163 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.163 00:07:29 -- host/discovery.sh@59 -- # sort 00:22:59.163 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.163 00:07:29 -- host/discovery.sh@59 -- # xargs 00:22:59.163 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.163 00:07:29 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:59.163 00:07:29 -- host/discovery.sh@92 -- # get_bdev_list 00:22:59.163 00:07:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.163 00:07:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.163 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.163 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.163 00:07:29 -- host/discovery.sh@55 -- # sort 00:22:59.163 00:07:29 -- host/discovery.sh@55 -- # xargs 00:22:59.163 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.163 00:07:29 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:59.163 00:07:29 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:59.163 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.163 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.163 [2024-04-27 00:07:29.334892] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.163 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.163 00:07:29 -- host/discovery.sh@97 -- # get_subsystem_names 00:22:59.163 00:07:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.163 00:07:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.163 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.163 00:07:29 -- host/discovery.sh@59 -- # sort 00:22:59.163 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.163 00:07:29 -- host/discovery.sh@59 -- # xargs 00:22:59.163 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.424 00:07:29 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:59.424 00:07:29 -- host/discovery.sh@98 -- # get_bdev_list 00:22:59.424 00:07:29 -- host/discovery.sh@55 -- # xargs 00:22:59.424 00:07:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.424 00:07:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.424 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.424 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.424 00:07:29 -- host/discovery.sh@55 -- # sort 00:22:59.424 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.424 00:07:29 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:59.424 00:07:29 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:59.424 00:07:29 -- host/discovery.sh@79 -- # expected_count=0 00:22:59.424 00:07:29 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:59.424 00:07:29 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:59.424 00:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.424 00:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.424 00:07:29 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:59.424 00:07:29 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:59.424 00:07:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:59.424 00:07:29 -- host/discovery.sh@74 -- # jq '. | length' 00:22:59.424 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.424 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.424 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.424 00:07:29 -- host/discovery.sh@74 -- # notification_count=0 00:22:59.424 00:07:29 -- host/discovery.sh@75 -- # notify_id=0 00:22:59.424 00:07:29 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:59.424 00:07:29 -- common/autotest_common.sh@904 -- # return 0 00:22:59.424 00:07:29 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:59.424 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.424 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.424 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.424 00:07:29 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:59.424 00:07:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:59.424 00:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.424 00:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.424 00:07:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:59.424 00:07:29 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:59.424 00:07:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.424 00:07:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.424 00:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.424 00:07:29 -- host/discovery.sh@59 -- # sort 00:22:59.424 00:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.424 00:07:29 -- host/discovery.sh@59 -- # xargs 00:22:59.424 00:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:22:59.424 00:07:29 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:22:59.424 00:07:29 -- common/autotest_common.sh@906 -- # sleep 1 00:22:59.997 [2024-04-27 00:07:30.038040] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:59.997 [2024-04-27 00:07:30.038060] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:59.997 [2024-04-27 00:07:30.038078] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:59.997 [2024-04-27 00:07:30.126352] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:00.258 [2024-04-27 00:07:30.351484] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:00.258 [2024-04-27 00:07:30.351515] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:00.518 00:07:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:00.518 00:07:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.518 00:07:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:00.518 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.518 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.518 00:07:30 -- host/discovery.sh@59 -- # sort 00:23:00.518 00:07:30 -- host/discovery.sh@59 -- # xargs 00:23:00.518 00:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.518 00:07:30 -- common/autotest_common.sh@904 -- # return 0 00:23:00.518 00:07:30 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:00.518 00:07:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:00.518 00:07:30 -- common/autotest_common.sh@901 -- # local max=10 00:23:00.518 00:07:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:00.518 00:07:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.518 00:07:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.518 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.518 00:07:30 -- host/discovery.sh@55 -- # sort 00:23:00.518 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.518 00:07:30 -- host/discovery.sh@55 -- # xargs 00:23:00.518 00:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:00.518 00:07:30 -- common/autotest_common.sh@904 -- # return 0 00:23:00.518 00:07:30 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:00.518 00:07:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:00.518 00:07:30 -- common/autotest_common.sh@901 -- # local max=10 00:23:00.518 00:07:30 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:00.518 00:07:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:00.518 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.518 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.518 00:07:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:00.518 00:07:30 -- host/discovery.sh@63 -- # sort -n 00:23:00.518 00:07:30 -- host/discovery.sh@63 -- # xargs 00:23:00.518 00:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:23:00.518 00:07:30 -- common/autotest_common.sh@904 -- # return 0 00:23:00.518 00:07:30 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:00.518 00:07:30 -- host/discovery.sh@79 -- # expected_count=1 00:23:00.518 00:07:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:00.518 00:07:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:00.518 00:07:30 -- common/autotest_common.sh@901 -- # local max=10 00:23:00.518 00:07:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:00.518 00:07:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:00.518 00:07:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:00.518 00:07:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:00.518 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.518 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.779 00:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.779 00:07:30 -- host/discovery.sh@74 -- # notification_count=1 00:23:00.780 00:07:30 -- host/discovery.sh@75 -- # notify_id=1 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:00.780 00:07:30 -- common/autotest_common.sh@904 -- # return 0 00:23:00.780 00:07:30 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:00.780 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.780 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.780 00:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.780 00:07:30 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:00.780 00:07:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:00.780 00:07:30 -- common/autotest_common.sh@901 -- # local max=10 00:23:00.780 00:07:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:00.780 00:07:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.780 00:07:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.780 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.780 00:07:30 -- host/discovery.sh@55 -- # sort 00:23:00.780 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.780 00:07:30 -- host/discovery.sh@55 -- # xargs 00:23:00.780 00:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:00.780 00:07:30 -- common/autotest_common.sh@904 -- # return 0 00:23:00.780 00:07:30 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:00.780 00:07:30 -- host/discovery.sh@79 -- # expected_count=1 00:23:00.780 00:07:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:00.780 00:07:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:00.780 00:07:30 -- common/autotest_common.sh@901 -- # local max=10 00:23:00.780 00:07:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:00.780 00:07:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:00.780 00:07:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:00.780 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.780 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.780 00:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.780 00:07:30 -- host/discovery.sh@74 -- # notification_count=1 00:23:00.780 00:07:30 -- host/discovery.sh@75 -- # notify_id=2 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:00.780 00:07:30 -- common/autotest_common.sh@904 -- # return 0 00:23:00.780 00:07:30 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:00.780 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.780 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.780 [2024-04-27 00:07:30.891205] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:00.780 [2024-04-27 00:07:30.892428] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:00.780 [2024-04-27 00:07:30.892454] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:00.780 00:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.780 00:07:30 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:00.780 00:07:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:00.780 00:07:30 -- common/autotest_common.sh@901 -- # local max=10 00:23:00.780 00:07:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:00.780 00:07:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.780 00:07:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:00.780 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.780 00:07:30 -- host/discovery.sh@59 -- # sort 00:23:00.780 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.780 00:07:30 -- host/discovery.sh@59 -- # xargs 00:23:00.780 00:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.780 00:07:30 -- common/autotest_common.sh@904 -- # return 0 00:23:00.780 00:07:30 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:00.780 00:07:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:00.780 00:07:30 -- common/autotest_common.sh@901 -- # local max=10 00:23:00.780 00:07:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:00.780 00:07:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:00.780 00:07:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.780 00:07:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.780 00:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.780 00:07:30 -- host/discovery.sh@55 -- # sort 00:23:00.780 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.780 00:07:30 -- host/discovery.sh@55 -- # xargs 00:23:00.780 00:07:30 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:23:01.041 00:07:31 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:01.042 00:07:31 -- common/autotest_common.sh@904 -- # return 0 00:23:01.042 00:07:31 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:01.042 00:07:31 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:01.042 00:07:31 -- common/autotest_common.sh@901 -- # local max=10 00:23:01.042 00:07:31 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:01.042 00:07:31 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:01.042 00:07:31 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:01.042 00:07:31 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:01.042 00:07:31 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:01.042 00:07:31 -- host/discovery.sh@63 -- # sort -n 00:23:01.042 00:07:31 -- host/discovery.sh@63 -- # xargs 00:23:01.042 00:07:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.042 00:07:31 -- common/autotest_common.sh@10 -- # set +x 00:23:01.042 00:07:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.042 [2024-04-27 00:07:31.022858] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:01.042 00:07:31 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:01.042 00:07:31 -- common/autotest_common.sh@906 -- # sleep 1 00:23:01.302 [2024-04-27 00:07:31.287194] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:01.302 [2024-04-27 00:07:31.287224] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:01.302 [2024-04-27 00:07:31.287231] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:01.888 00:07:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:01.888 00:07:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:01.888 00:07:32 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:01.888 00:07:32 -- host/discovery.sh@63 -- # sort -n 00:23:01.888 00:07:32 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:01.888 00:07:32 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:01.888 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.888 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:01.888 00:07:32 -- host/discovery.sh@63 -- # xargs 00:23:01.888 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.148 00:07:32 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:02.148 00:07:32 -- common/autotest_common.sh@904 -- # return 0 00:23:02.148 00:07:32 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:02.148 00:07:32 -- host/discovery.sh@79 -- # expected_count=0 00:23:02.148 00:07:32 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:02.148 00:07:32 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:02.148 00:07:32 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.148 00:07:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.148 00:07:32 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:02.148 00:07:32 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:02.148 00:07:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:02.148 00:07:32 -- host/discovery.sh@74 -- # jq '. | length' 00:23:02.148 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.148 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.148 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.148 00:07:32 -- host/discovery.sh@74 -- # notification_count=0 00:23:02.148 00:07:32 -- host/discovery.sh@75 -- # notify_id=2 00:23:02.148 00:07:32 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:02.148 00:07:32 -- common/autotest_common.sh@904 -- # return 0 00:23:02.148 00:07:32 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:02.149 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.149 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.149 [2024-04-27 00:07:32.171696] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:02.149 [2024-04-27 00:07:32.171717] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:02.149 [2024-04-27 00:07:32.173062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.149 [2024-04-27 00:07:32.173081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.149 [2024-04-27 00:07:32.173091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.149 [2024-04-27 00:07:32.173099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.149 [2024-04-27 00:07:32.173107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.149 [2024-04-27 00:07:32.173114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.149 [2024-04-27 00:07:32.173130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.149 [2024-04-27 00:07:32.173137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.149 [2024-04-27 00:07:32.173145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93680 is same with the state(5) to be set 00:23:02.149 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.149 00:07:32 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:02.149 00:07:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:02.149 00:07:32 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.149 00:07:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.149 00:07:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:02.149 00:07:32 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:02.149 00:07:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.149 00:07:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.149 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.149 00:07:32 -- host/discovery.sh@59 -- # sort 00:23:02.149 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.149 00:07:32 -- host/discovery.sh@59 -- # xargs 00:23:02.149 [2024-04-27 00:07:32.183077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93680 (9): Bad file descriptor 00:23:02.149 [2024-04-27 00:07:32.193115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:02.149 [2024-04-27 00:07:32.193419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.149 [2024-04-27 00:07:32.193771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.149 [2024-04-27 00:07:32.193781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa93680 with addr=10.0.0.2, port=4420 00:23:02.149 [2024-04-27 00:07:32.193789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93680 is same with the state(5) to be set 00:23:02.149 [2024-04-27 00:07:32.193801] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93680 (9): Bad file descriptor 00:23:02.149 [2024-04-27 00:07:32.193818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:02.149 [2024-04-27 00:07:32.193826] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:02.149 [2024-04-27 00:07:32.193834] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:02.149 [2024-04-27 00:07:32.193852] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:02.149 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.149 [2024-04-27 00:07:32.203171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:02.149 [2024-04-27 00:07:32.203524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.149 [2024-04-27 00:07:32.204064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.149 [2024-04-27 00:07:32.204101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa93680 with addr=10.0.0.2, port=4420 00:23:02.149 [2024-04-27 00:07:32.204112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93680 is same with the state(5) to be set 00:23:02.149 [2024-04-27 00:07:32.204131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93680 (9): Bad file descriptor 00:23:02.149 [2024-04-27 00:07:32.204159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:02.149 [2024-04-27 00:07:32.204168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:02.149 [2024-04-27 00:07:32.204176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:02.149 [2024-04-27 00:07:32.204195] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:02.149 [2024-04-27 00:07:32.213225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:02.149 [2024-04-27 00:07:32.213586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.149 [2024-04-27 00:07:32.213783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.149 [2024-04-27 00:07:32.213795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa93680 with addr=10.0.0.2, port=4420 00:23:02.149 [2024-04-27 00:07:32.213804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93680 is same with the state(5) to be set 00:23:02.149 [2024-04-27 00:07:32.213817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93680 (9): Bad file descriptor 00:23:02.149 [2024-04-27 00:07:32.213828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:02.149 [2024-04-27 00:07:32.213834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:02.149 [2024-04-27 00:07:32.213852] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:02.149 [2024-04-27 00:07:32.213864] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:02.149 00:07:32 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.149 00:07:32 -- common/autotest_common.sh@904 -- # return 0 00:23:02.149 00:07:32 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:02.149 00:07:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:02.149 00:07:32 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.149 00:07:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.149 00:07:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:02.149 00:07:32 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:02.149 00:07:32 -- host/discovery.sh@55 -- # xargs 00:23:02.149 00:07:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.149 00:07:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.150 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.150 00:07:32 -- host/discovery.sh@55 -- # sort 00:23:02.150 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.150 [2024-04-27 00:07:32.223279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:02.150 [2024-04-27 00:07:32.223641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.150 [2024-04-27 00:07:32.223868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.150 [2024-04-27 00:07:32.223888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa93680 with addr=10.0.0.2, port=4420 00:23:02.150 [2024-04-27 00:07:32.223897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93680 is same with the state(5) to be set 00:23:02.150 [2024-04-27 00:07:32.223911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93680 (9): Bad file descriptor 00:23:02.150 [2024-04-27 00:07:32.223923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:02.150 [2024-04-27 00:07:32.223930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:02.150 [2024-04-27 00:07:32.223937] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:02.150 [2024-04-27 00:07:32.223949] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
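Interleaved with the reconnect retries above and below, the script simply keeps polling each condition until the path set converges. From the max=10 / eval / sleep pattern in the trace, waitforcondition is roughly the following bounded poll (a reconstruction, not the verbatim helper):
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}
# e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'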
00:23:02.150 [2024-04-27 00:07:32.233335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:02.150 [2024-04-27 00:07:32.233666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.150 [2024-04-27 00:07:32.234126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.150 [2024-04-27 00:07:32.234167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa93680 with addr=10.0.0.2, port=4420 00:23:02.150 [2024-04-27 00:07:32.234178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93680 is same with the state(5) to be set 00:23:02.150 [2024-04-27 00:07:32.234197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93680 (9): Bad file descriptor 00:23:02.150 [2024-04-27 00:07:32.234209] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:02.150 [2024-04-27 00:07:32.234215] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:02.150 [2024-04-27 00:07:32.234223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:02.150 [2024-04-27 00:07:32.234238] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:02.150 [2024-04-27 00:07:32.243390] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:02.150 [2024-04-27 00:07:32.243578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.150 [2024-04-27 00:07:32.243862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.150 [2024-04-27 00:07:32.243873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa93680 with addr=10.0.0.2, port=4420 00:23:02.150 [2024-04-27 00:07:32.243880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93680 is same with the state(5) to be set 00:23:02.150 [2024-04-27 00:07:32.243892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93680 (9): Bad file descriptor 00:23:02.150 [2024-04-27 00:07:32.243902] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:02.150 [2024-04-27 00:07:32.243908] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:02.150 [2024-04-27 00:07:32.243915] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:02.150 [2024-04-27 00:07:32.243926] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:02.150 [2024-04-27 00:07:32.253443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:02.150 [2024-04-27 00:07:32.253670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.150 [2024-04-27 00:07:32.254112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.150 [2024-04-27 00:07:32.254122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa93680 with addr=10.0.0.2, port=4420 00:23:02.150 [2024-04-27 00:07:32.254129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93680 is same with the state(5) to be set 00:23:02.150 [2024-04-27 00:07:32.254139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93680 (9): Bad file descriptor 00:23:02.150 [2024-04-27 00:07:32.254149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:02.150 [2024-04-27 00:07:32.254156] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:02.150 [2024-04-27 00:07:32.254162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:02.150 [2024-04-27 00:07:32.254173] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:02.150 [2024-04-27 00:07:32.260347] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:02.150 [2024-04-27 00:07:32.260365] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:02.150 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.150 00:07:32 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:02.150 00:07:32 -- common/autotest_common.sh@904 -- # return 0 00:23:02.150 00:07:32 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:02.150 00:07:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:02.150 00:07:32 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.150 00:07:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.150 00:07:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:02.150 00:07:32 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:02.150 00:07:32 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.150 00:07:32 -- host/discovery.sh@63 -- # xargs 00:23:02.150 00:07:32 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:02.150 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.150 00:07:32 -- host/discovery.sh@63 -- # sort -n 00:23:02.150 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.150 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.150 00:07:32 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:02.150 00:07:32 -- common/autotest_common.sh@904 -- # return 0 00:23:02.150 00:07:32 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:02.150 00:07:32 -- host/discovery.sh@79 -- # expected_count=0 00:23:02.150 00:07:32 -- 
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:02.150 00:07:32 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:02.151 00:07:32 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.151 00:07:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.151 00:07:32 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:02.151 00:07:32 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:02.151 00:07:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:02.151 00:07:32 -- host/discovery.sh@74 -- # jq '. | length' 00:23:02.151 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.151 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.151 00:07:32 -- host/discovery.sh@74 -- # notification_count=0 00:23:02.151 00:07:32 -- host/discovery.sh@75 -- # notify_id=2 00:23:02.151 00:07:32 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:02.151 00:07:32 -- common/autotest_common.sh@904 -- # return 0 00:23:02.151 00:07:32 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:02.151 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.151 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.411 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.411 00:07:32 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:02.411 00:07:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:02.411 00:07:32 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.411 00:07:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.411 00:07:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:02.411 00:07:32 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:02.411 00:07:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.411 00:07:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.411 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.411 00:07:32 -- host/discovery.sh@59 -- # sort 00:23:02.411 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.411 00:07:32 -- host/discovery.sh@59 -- # xargs 00:23:02.411 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.411 00:07:32 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:02.411 00:07:32 -- common/autotest_common.sh@904 -- # return 0 00:23:02.411 00:07:32 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:02.411 00:07:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:02.411 00:07:32 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.411 00:07:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.411 00:07:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:02.411 00:07:32 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:02.411 00:07:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.411 00:07:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.411 00:07:32 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.411 00:07:32 -- host/discovery.sh@55 -- # sort 00:23:02.411 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.411 00:07:32 -- host/discovery.sh@55 -- # xargs 00:23:02.411 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.411 00:07:32 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:02.411 00:07:32 -- common/autotest_common.sh@904 -- # return 0 00:23:02.411 00:07:32 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:02.411 00:07:32 -- host/discovery.sh@79 -- # expected_count=2 00:23:02.411 00:07:32 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:02.411 00:07:32 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:02.411 00:07:32 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.411 00:07:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.411 00:07:32 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:02.411 00:07:32 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:02.411 00:07:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:02.411 00:07:32 -- host/discovery.sh@74 -- # jq '. | length' 00:23:02.411 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.411 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.411 00:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.411 00:07:32 -- host/discovery.sh@74 -- # notification_count=2 00:23:02.411 00:07:32 -- host/discovery.sh@75 -- # notify_id=4 00:23:02.411 00:07:32 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:02.411 00:07:32 -- common/autotest_common.sh@904 -- # return 0 00:23:02.411 00:07:32 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:02.411 00:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.411 00:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:03.794 [2024-04-27 00:07:33.586047] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:03.794 [2024-04-27 00:07:33.586064] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:03.794 [2024-04-27 00:07:33.586076] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.794 [2024-04-27 00:07:33.674357] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:03.794 [2024-04-27 00:07:33.739043] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:03.794 [2024-04-27 00:07:33.739073] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:03.794 00:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.794 00:07:33 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.794 00:07:33 -- common/autotest_common.sh@638 -- # local es=0 00:23:03.794 00:07:33 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.794 00:07:33 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:03.794 00:07:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.794 00:07:33 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:03.794 00:07:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.794 00:07:33 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.794 00:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.794 00:07:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.794 request: 00:23:03.794 { 00:23:03.794 "name": "nvme", 00:23:03.794 "trtype": "tcp", 00:23:03.794 "traddr": "10.0.0.2", 00:23:03.794 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:03.794 "adrfam": "ipv4", 00:23:03.794 "trsvcid": "8009", 00:23:03.794 "wait_for_attach": true, 00:23:03.794 "method": "bdev_nvme_start_discovery", 00:23:03.794 "req_id": 1 00:23:03.794 } 00:23:03.794 Got JSON-RPC error response 00:23:03.794 response: 00:23:03.794 { 00:23:03.794 "code": -17, 00:23:03.794 "message": "File exists" 00:23:03.794 } 00:23:03.794 00:07:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:03.794 00:07:33 -- common/autotest_common.sh@641 -- # es=1 00:23:03.794 00:07:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:03.794 00:07:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:03.794 00:07:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:03.794 00:07:33 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:03.794 00:07:33 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:03.794 00:07:33 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:03.794 00:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.794 00:07:33 -- host/discovery.sh@67 -- # sort 00:23:03.794 00:07:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.794 00:07:33 -- host/discovery.sh@67 -- # xargs 00:23:03.794 00:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.794 00:07:33 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:03.794 00:07:33 -- host/discovery.sh@146 -- # get_bdev_list 00:23:03.794 00:07:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.794 00:07:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.794 00:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.794 00:07:33 -- host/discovery.sh@55 -- # sort 00:23:03.794 00:07:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.794 00:07:33 -- host/discovery.sh@55 -- # xargs 00:23:03.794 00:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.794 00:07:33 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:03.794 00:07:33 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.794 00:07:33 -- common/autotest_common.sh@638 -- # local es=0 00:23:03.794 00:07:33 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.794 00:07:33 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:03.794 00:07:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.794 00:07:33 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:03.794 00:07:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.794 00:07:33 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.794 00:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.794 00:07:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.794 request: 00:23:03.794 { 00:23:03.794 "name": "nvme_second", 00:23:03.794 "trtype": "tcp", 00:23:03.794 "traddr": "10.0.0.2", 00:23:03.794 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:03.794 "adrfam": "ipv4", 00:23:03.794 "trsvcid": "8009", 00:23:03.794 "wait_for_attach": true, 00:23:03.794 "method": "bdev_nvme_start_discovery", 00:23:03.794 "req_id": 1 00:23:03.794 } 00:23:03.794 Got JSON-RPC error response 00:23:03.794 response: 00:23:03.794 { 00:23:03.794 "code": -17, 00:23:03.794 "message": "File exists" 00:23:03.794 } 00:23:03.794 00:07:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:03.794 00:07:33 -- common/autotest_common.sh@641 -- # es=1 00:23:03.794 00:07:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:03.794 00:07:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:03.794 00:07:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:03.794 00:07:33 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:03.794 00:07:33 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:03.794 00:07:33 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:03.794 00:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.794 00:07:33 -- host/discovery.sh@67 -- # sort 00:23:03.794 00:07:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.794 00:07:33 -- host/discovery.sh@67 -- # xargs 00:23:03.794 00:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.794 00:07:33 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:03.794 00:07:33 -- host/discovery.sh@152 -- # get_bdev_list 00:23:03.794 00:07:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.794 00:07:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.794 00:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.795 00:07:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.795 00:07:33 -- host/discovery.sh@55 -- # sort 00:23:03.795 00:07:33 -- host/discovery.sh@55 -- # xargs 00:23:03.795 00:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.795 00:07:33 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:03.795 00:07:33 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:03.795 00:07:33 -- common/autotest_common.sh@638 -- # local es=0 00:23:03.795 00:07:33 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:03.795 00:07:33 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:03.795 00:07:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.795 00:07:33 -- common/autotest_common.sh@630 -- # 
type -t rpc_cmd 00:23:03.795 00:07:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.795 00:07:34 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:03.795 00:07:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.795 00:07:34 -- common/autotest_common.sh@10 -- # set +x 00:23:05.179 [2024-04-27 00:07:35.010600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.179 [2024-04-27 00:07:35.010967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.180 [2024-04-27 00:07:35.010980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0c850 with addr=10.0.0.2, port=8010 00:23:05.180 [2024-04-27 00:07:35.010991] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:05.180 [2024-04-27 00:07:35.010998] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:05.180 [2024-04-27 00:07:35.011005] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:06.122 [2024-04-27 00:07:36.012932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.122 [2024-04-27 00:07:36.013160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.122 [2024-04-27 00:07:36.013171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0c850 with addr=10.0.0.2, port=8010 00:23:06.122 [2024-04-27 00:07:36.013182] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:06.122 [2024-04-27 00:07:36.013188] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:06.123 [2024-04-27 00:07:36.013195] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:07.066 [2024-04-27 00:07:37.014914] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:07.066 request: 00:23:07.066 { 00:23:07.066 "name": "nvme_second", 00:23:07.066 "trtype": "tcp", 00:23:07.066 "traddr": "10.0.0.2", 00:23:07.066 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:07.066 "adrfam": "ipv4", 00:23:07.066 "trsvcid": "8010", 00:23:07.066 "attach_timeout_ms": 3000, 00:23:07.066 "method": "bdev_nvme_start_discovery", 00:23:07.066 "req_id": 1 00:23:07.066 } 00:23:07.066 Got JSON-RPC error response 00:23:07.066 response: 00:23:07.066 { 00:23:07.066 "code": -110, 00:23:07.066 "message": "Connection timed out" 00:23:07.066 } 00:23:07.066 00:07:37 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:07.066 00:07:37 -- common/autotest_common.sh@641 -- # es=1 00:23:07.066 00:07:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:07.066 00:07:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:07.066 00:07:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:07.066 00:07:37 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:07.066 00:07:37 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:07.066 00:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.066 00:07:37 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:07.066 00:07:37 -- common/autotest_common.sh@10 -- # set +x 00:23:07.066 00:07:37 -- host/discovery.sh@67 -- # sort 00:23:07.066 00:07:37 -- host/discovery.sh@67 -- # xargs 
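Note: the two -17 ("File exists") responses above come from re-issuing bdev_nvme_start_discovery against a discovery endpoint (10.0.0.2:8009) the host is already attached to, while the -110 ("Connection timed out") response comes from pointing a second discovery at port 8010, where nothing listens, with a 3000 ms attach timeout. A minimal sketch of that last negative check, assuming the traced rpc_cmd arguments map one-to-one onto a plain scripts/rpc.py invocation against the host application's /tmp/host.sock RPC socket (reconstructed from the xtrace, not the verbatim test script):

    # Assumption: equivalent rpc.py call inferred from the rpc_cmd xtrace above.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000
    # Expected outcome, as in the trace: JSON-RPC error -110 (Connection timed
    # out) after roughly 3 seconds, because nothing listens on 10.0.0.2:8010.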
00:23:07.066 00:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.066 00:07:37 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:07.066 00:07:37 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:07.066 00:07:37 -- host/discovery.sh@161 -- # kill 497881 00:23:07.066 00:07:37 -- host/discovery.sh@162 -- # nvmftestfini 00:23:07.066 00:07:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:07.066 00:07:37 -- nvmf/common.sh@117 -- # sync 00:23:07.066 00:07:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.066 00:07:37 -- nvmf/common.sh@120 -- # set +e 00:23:07.066 00:07:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.066 00:07:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.066 rmmod nvme_tcp 00:23:07.066 rmmod nvme_fabrics 00:23:07.066 rmmod nvme_keyring 00:23:07.066 00:07:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.067 00:07:37 -- nvmf/common.sh@124 -- # set -e 00:23:07.067 00:07:37 -- nvmf/common.sh@125 -- # return 0 00:23:07.067 00:07:37 -- nvmf/common.sh@478 -- # '[' -n 497593 ']' 00:23:07.067 00:07:37 -- nvmf/common.sh@479 -- # killprocess 497593 00:23:07.067 00:07:37 -- common/autotest_common.sh@936 -- # '[' -z 497593 ']' 00:23:07.067 00:07:37 -- common/autotest_common.sh@940 -- # kill -0 497593 00:23:07.067 00:07:37 -- common/autotest_common.sh@941 -- # uname 00:23:07.067 00:07:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:07.067 00:07:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 497593 00:23:07.067 00:07:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:07.067 00:07:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:07.067 00:07:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 497593' 00:23:07.067 killing process with pid 497593 00:23:07.067 00:07:37 -- common/autotest_common.sh@955 -- # kill 497593 00:23:07.067 00:07:37 -- common/autotest_common.sh@960 -- # wait 497593 00:23:07.328 00:07:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:07.328 00:07:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:07.328 00:07:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:07.328 00:07:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.328 00:07:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.328 00:07:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.328 00:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.328 00:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.241 00:07:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.241 00:23:09.241 real 0m19.438s 00:23:09.241 user 0m22.832s 00:23:09.241 sys 0m6.570s 00:23:09.241 00:07:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:09.241 00:07:39 -- common/autotest_common.sh@10 -- # set +x 00:23:09.241 ************************************ 00:23:09.241 END TEST nvmf_discovery 00:23:09.241 ************************************ 00:23:09.241 00:07:39 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:09.241 00:07:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:09.241 00:07:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:09.241 00:07:39 -- common/autotest_common.sh@10 -- # set +x 00:23:09.502 ************************************ 00:23:09.502 START TEST 
nvmf_discovery_remove_ifc 00:23:09.502 ************************************ 00:23:09.502 00:07:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:09.502 * Looking for test storage... 00:23:09.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.502 00:07:39 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.502 00:07:39 -- nvmf/common.sh@7 -- # uname -s 00:23:09.502 00:07:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.502 00:07:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.502 00:07:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.502 00:07:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.502 00:07:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.502 00:07:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.502 00:07:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.502 00:07:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.502 00:07:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.502 00:07:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.502 00:07:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:09.502 00:07:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:09.502 00:07:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.502 00:07:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.502 00:07:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.502 00:07:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.502 00:07:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.502 00:07:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.763 00:07:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.763 00:07:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.763 00:07:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.763 00:07:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.763 00:07:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.763 00:07:39 -- paths/export.sh@5 -- # export PATH 00:23:09.763 00:07:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.763 00:07:39 -- nvmf/common.sh@47 -- # : 0 00:23:09.763 00:07:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.763 00:07:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.763 00:07:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.763 00:07:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.764 00:07:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.764 00:07:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.764 00:07:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.764 00:07:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.764 00:07:39 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:09.764 00:07:39 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:09.764 00:07:39 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:09.764 00:07:39 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:09.764 00:07:39 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:09.764 00:07:39 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:09.764 00:07:39 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:09.764 00:07:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:09.764 00:07:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.764 00:07:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:09.764 00:07:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:09.764 00:07:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:09.764 00:07:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.764 00:07:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.764 00:07:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.764 00:07:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:09.764 00:07:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:09.764 00:07:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.764 00:07:39 -- common/autotest_common.sh@10 -- # set +x 00:23:17.928 00:07:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:17.928 00:07:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.928 00:07:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.929 00:07:46 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.929 00:07:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.929 00:07:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.929 00:07:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.929 00:07:46 -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.929 00:07:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.929 00:07:46 -- nvmf/common.sh@296 -- # e810=() 00:23:17.929 00:07:46 -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.929 00:07:46 -- nvmf/common.sh@297 -- # x722=() 00:23:17.929 00:07:46 -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.929 00:07:46 -- nvmf/common.sh@298 -- # mlx=() 00:23:17.929 00:07:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.929 00:07:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.929 00:07:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.929 00:07:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.929 00:07:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.929 00:07:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.929 00:07:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:17.929 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:17.929 00:07:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.929 00:07:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:17.929 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:17.929 00:07:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.929 00:07:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.929 00:07:46 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.929 00:07:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.929 00:07:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:17.929 00:07:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.929 00:07:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:17.929 Found net devices under 0000:31:00.0: cvl_0_0 00:23:17.929 00:07:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.929 00:07:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.929 00:07:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.929 00:07:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:17.929 00:07:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.929 00:07:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:17.929 Found net devices under 0000:31:00.1: cvl_0_1 00:23:17.929 00:07:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.929 00:07:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:17.929 00:07:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:17.929 00:07:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:17.929 00:07:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:17.929 00:07:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.929 00:07:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.929 00:07:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.929 00:07:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.929 00:07:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.929 00:07:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.929 00:07:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.929 00:07:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.929 00:07:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.929 00:07:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.929 00:07:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.929 00:07:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.929 00:07:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.929 00:07:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.929 00:07:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.929 00:07:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.929 00:07:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.929 00:07:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.929 00:07:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.929 00:07:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:17.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:23:17.929 00:23:17.929 --- 10.0.0.2 ping statistics --- 00:23:17.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.930 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:23:17.930 00:07:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:23:17.930 00:23:17.930 --- 10.0.0.1 ping statistics --- 00:23:17.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.930 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:23:17.930 00:07:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.930 00:07:47 -- nvmf/common.sh@411 -- # return 0 00:23:17.930 00:07:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:17.930 00:07:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.930 00:07:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:17.930 00:07:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:17.930 00:07:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.930 00:07:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:17.930 00:07:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:17.930 00:07:47 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:17.930 00:07:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:17.930 00:07:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:17.930 00:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:17.930 00:07:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:17.930 00:07:47 -- nvmf/common.sh@470 -- # nvmfpid=504061 00:23:17.930 00:07:47 -- nvmf/common.sh@471 -- # waitforlisten 504061 00:23:17.930 00:07:47 -- common/autotest_common.sh@817 -- # '[' -z 504061 ']' 00:23:17.930 00:07:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.930 00:07:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:17.930 00:07:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.930 00:07:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:17.930 00:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:17.930 [2024-04-27 00:07:47.184395] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:23:17.930 [2024-04-27 00:07:47.184449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.930 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.930 [2024-04-27 00:07:47.250523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.930 [2024-04-27 00:07:47.315674] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.930 [2024-04-27 00:07:47.315712] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:17.930 [2024-04-27 00:07:47.315720] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.930 [2024-04-27 00:07:47.315726] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.930 [2024-04-27 00:07:47.315731] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.930 [2024-04-27 00:07:47.315749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.930 00:07:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:17.930 00:07:47 -- common/autotest_common.sh@850 -- # return 0 00:23:17.930 00:07:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:17.930 00:07:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:17.930 00:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:17.930 00:07:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.930 00:07:47 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:17.930 00:07:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.930 00:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:17.930 [2024-04-27 00:07:47.993742] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.930 [2024-04-27 00:07:48.001897] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:17.930 null0 00:23:17.930 [2024-04-27 00:07:48.033894] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.930 00:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.930 00:07:48 -- host/discovery_remove_ifc.sh@59 -- # hostpid=504152 00:23:17.930 00:07:48 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 504152 /tmp/host.sock 00:23:17.930 00:07:48 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:17.930 00:07:48 -- common/autotest_common.sh@817 -- # '[' -z 504152 ']' 00:23:17.930 00:07:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:17.930 00:07:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:17.930 00:07:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:17.930 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:17.930 00:07:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:17.930 00:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:17.930 [2024-04-27 00:07:48.102454] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:23:17.930 [2024-04-27 00:07:48.102498] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504152 ] 00:23:17.930 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.192 [2024-04-27 00:07:48.160938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.192 [2024-04-27 00:07:48.225189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.765 00:07:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:18.765 00:07:48 -- common/autotest_common.sh@850 -- # return 0 00:23:18.765 00:07:48 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.765 00:07:48 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:18.765 00:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.765 00:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:18.765 00:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.765 00:07:48 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:18.765 00:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.765 00:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:18.765 00:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.765 00:07:48 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:18.765 00:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.765 00:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:20.152 [2024-04-27 00:07:50.002806] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:20.152 [2024-04-27 00:07:50.002827] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:20.152 [2024-04-27 00:07:50.002845] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.152 [2024-04-27 00:07:50.132266] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:20.152 [2024-04-27 00:07:50.318037] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:20.152 [2024-04-27 00:07:50.318085] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:20.152 [2024-04-27 00:07:50.318107] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:20.152 [2024-04-27 00:07:50.318121] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:20.152 [2024-04-27 00:07:50.318142] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:20.152 00:07:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.153 00:07:50 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:20.153 [2024-04-27 00:07:50.321446] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2431f10 was disconnected and freed. delete nvme_qpair. 
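Note: once the discovery attach completes, the harness polls bdev_get_bdevs over the host's /tmp/host.sock RPC socket once per second until the expected bdev name (here nvme0n1) shows up, and later until the list goes empty again. A minimal sketch of that polling pattern, reconstructed from the get_bdev_list/wait_for_bdev xtrace above (the helper bodies are an assumption, not the verbatim script; rpc_cmd stands in for the harness wrapper around scripts/rpc.py):

    # Assumption: reconstructed from the xtrace of host/discovery_remove_ifc.sh.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected value
        # ('' is used while waiting for a bdev to disappear).
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }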
00:23:20.153 00:07:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.153 00:07:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.153 00:07:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.153 00:07:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.153 00:07:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.153 00:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:20.153 00:07:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.153 00:07:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.413 00:07:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:20.413 00:07:50 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:20.413 00:07:50 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:20.413 00:07:50 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:20.413 00:07:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.413 00:07:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.413 00:07:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.413 00:07:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.413 00:07:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.413 00:07:50 -- common/autotest_common.sh@10 -- # set +x 00:23:20.413 00:07:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.413 00:07:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.414 00:07:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:20.414 00:07:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.356 00:07:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.356 00:07:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.356 00:07:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.356 00:07:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.356 00:07:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.356 00:07:51 -- common/autotest_common.sh@10 -- # set +x 00:23:21.616 00:07:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.616 00:07:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.616 00:07:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:21.616 00:07:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:22.557 00:07:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.557 00:07:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.557 00:07:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.557 00:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.557 00:07:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.557 00:07:52 -- common/autotest_common.sh@10 -- # set +x 00:23:22.557 00:07:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.557 00:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.557 00:07:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:22.557 00:07:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:23.498 00:07:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:23.498 00:07:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.498 00:07:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:23:23.498 00:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.498 00:07:53 -- common/autotest_common.sh@10 -- # set +x 00:23:23.498 00:07:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:23.498 00:07:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:23.498 00:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.759 00:07:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:23.759 00:07:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:24.700 00:07:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.700 00:07:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.700 00:07:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.700 00:07:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.700 00:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.700 00:07:54 -- common/autotest_common.sh@10 -- # set +x 00:23:24.700 00:07:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.700 00:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.700 00:07:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:24.700 00:07:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:25.693 [2024-04-27 00:07:55.758737] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:25.693 [2024-04-27 00:07:55.758780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.693 [2024-04-27 00:07:55.758792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.693 [2024-04-27 00:07:55.758802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.693 [2024-04-27 00:07:55.758810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.693 [2024-04-27 00:07:55.758818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.693 [2024-04-27 00:07:55.758825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.693 [2024-04-27 00:07:55.758833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.693 [2024-04-27 00:07:55.758849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.693 [2024-04-27 00:07:55.758858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.693 [2024-04-27 00:07:55.758865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.693 [2024-04-27 00:07:55.758872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f8400 is same with the state(5) to be set 00:23:25.693 [2024-04-27 00:07:55.768756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f8400 (9): Bad file descriptor 
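Note: the burst of ABORTED - SQ DELETION completions and the "Bad file descriptor" flush error above are the host-side reaction to the target interface being pulled out from under the admin queue pair: the outstanding ASYNC EVENT REQUEST and KEEP ALIVE commands are aborted, and the reconnect attempts that follow keep failing (errno 110) until the address comes back. The interface bounce itself is driven from the target's network namespace; a condensed sketch of the sequence, taken from the 'ip netns exec' commands in the surrounding xtrace (ordering as observed, comments are assumptions):

    # Take the target address/interface away inside the target's namespace ...
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... wait for the host-side bdev (nvme0n1) to disappear, then restore it:
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # ... and wait for discovery to re-attach and surface a new bdev (nvme1n1).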
00:23:25.693 [2024-04-27 00:07:55.778796] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.693 00:07:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:25.693 00:07:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:25.693 00:07:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.693 00:07:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:25.693 00:07:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.693 00:07:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.693 00:07:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.636 [2024-04-27 00:07:56.809353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:28.022 [2024-04-27 00:07:57.832877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:28.022 [2024-04-27 00:07:57.832922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f8400 with addr=10.0.0.2, port=4420 00:23:28.022 [2024-04-27 00:07:57.832937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f8400 is same with the state(5) to be set 00:23:28.022 [2024-04-27 00:07:57.833314] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f8400 (9): Bad file descriptor 00:23:28.022 [2024-04-27 00:07:57.833339] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.022 [2024-04-27 00:07:57.833357] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:28.022 [2024-04-27 00:07:57.833380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.022 [2024-04-27 00:07:57.833391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.022 [2024-04-27 00:07:57.833403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.022 [2024-04-27 00:07:57.833410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.022 [2024-04-27 00:07:57.833419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.022 [2024-04-27 00:07:57.833426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.022 [2024-04-27 00:07:57.833434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.022 [2024-04-27 00:07:57.833441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.022 [2024-04-27 00:07:57.833449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.022 [2024-04-27 00:07:57.833456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.022 [2024-04-27 00:07:57.833464] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:23:28.022 [2024-04-27 00:07:57.833962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f8810 (9): Bad file descriptor 00:23:28.022 [2024-04-27 00:07:57.834975] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:28.022 [2024-04-27 00:07:57.834986] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:28.022 00:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.022 00:07:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:28.022 00:07:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.966 00:07:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.966 00:07:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.966 00:07:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.966 00:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.966 00:07:58 -- common/autotest_common.sh@10 -- # set +x 00:23:28.966 00:07:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.966 00:07:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.966 00:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.966 00:07:58 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:28.966 00:07:58 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.966 00:07:58 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.966 00:07:59 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:28.966 00:07:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.966 00:07:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.966 00:07:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.966 00:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.966 00:07:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.966 00:07:59 -- common/autotest_common.sh@10 -- # set +x 00:23:28.966 00:07:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.966 00:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.966 00:07:59 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:28.966 00:07:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:29.908 [2024-04-27 00:07:59.889045] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.908 [2024-04-27 00:07:59.889065] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.908 [2024-04-27 00:07:59.889079] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.908 [2024-04-27 00:08:00.017946] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:29.908 00:08:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.908 00:08:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.908 00:08:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.908 00:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.908 00:08:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.908 00:08:00 
-- common/autotest_common.sh@10 -- # set +x 00:23:29.908 00:08:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.908 00:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.908 [2024-04-27 00:08:00.118901] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:29.908 [2024-04-27 00:08:00.118941] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:29.908 [2024-04-27 00:08:00.118961] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:29.908 [2024-04-27 00:08:00.118975] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:29.908 [2024-04-27 00:08:00.118983] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.908 [2024-04-27 00:08:00.126270] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x243ca80 was disconnected and freed. delete nvme_qpair. 00:23:30.168 00:08:00 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:30.168 00:08:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.109 00:08:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.109 00:08:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.109 00:08:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.109 00:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.109 00:08:01 -- common/autotest_common.sh@10 -- # set +x 00:23:31.109 00:08:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.109 00:08:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.109 00:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.109 00:08:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:31.109 00:08:01 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:31.109 00:08:01 -- host/discovery_remove_ifc.sh@90 -- # killprocess 504152 00:23:31.109 00:08:01 -- common/autotest_common.sh@936 -- # '[' -z 504152 ']' 00:23:31.109 00:08:01 -- common/autotest_common.sh@940 -- # kill -0 504152 00:23:31.109 00:08:01 -- common/autotest_common.sh@941 -- # uname 00:23:31.109 00:08:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.109 00:08:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 504152 00:23:31.109 00:08:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:31.109 00:08:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:31.109 00:08:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 504152' 00:23:31.109 killing process with pid 504152 00:23:31.109 00:08:01 -- common/autotest_common.sh@955 -- # kill 504152 00:23:31.109 00:08:01 -- common/autotest_common.sh@960 -- # wait 504152 00:23:31.369 00:08:01 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:31.369 00:08:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:31.370 00:08:01 -- nvmf/common.sh@117 -- # sync 00:23:31.370 00:08:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.370 00:08:01 -- nvmf/common.sh@120 -- # set +e 00:23:31.370 00:08:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.370 00:08:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.370 rmmod nvme_tcp 00:23:31.370 rmmod nvme_fabrics 00:23:31.370 rmmod nvme_keyring 00:23:31.370 00:08:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.370 00:08:01 -- 
nvmf/common.sh@124 -- # set -e 00:23:31.370 00:08:01 -- nvmf/common.sh@125 -- # return 0 00:23:31.370 00:08:01 -- nvmf/common.sh@478 -- # '[' -n 504061 ']' 00:23:31.370 00:08:01 -- nvmf/common.sh@479 -- # killprocess 504061 00:23:31.370 00:08:01 -- common/autotest_common.sh@936 -- # '[' -z 504061 ']' 00:23:31.370 00:08:01 -- common/autotest_common.sh@940 -- # kill -0 504061 00:23:31.370 00:08:01 -- common/autotest_common.sh@941 -- # uname 00:23:31.370 00:08:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.370 00:08:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 504061 00:23:31.370 00:08:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:31.370 00:08:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:31.370 00:08:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 504061' 00:23:31.370 killing process with pid 504061 00:23:31.370 00:08:01 -- common/autotest_common.sh@955 -- # kill 504061 00:23:31.370 00:08:01 -- common/autotest_common.sh@960 -- # wait 504061 00:23:31.630 00:08:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:31.630 00:08:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:31.630 00:08:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:31.630 00:08:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.630 00:08:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.630 00:08:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.630 00:08:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.630 00:08:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.541 00:08:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.541 00:23:33.541 real 0m24.108s 00:23:33.541 user 0m28.259s 00:23:33.541 sys 0m6.736s 00:23:33.541 00:08:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.541 00:08:03 -- common/autotest_common.sh@10 -- # set +x 00:23:33.541 ************************************ 00:23:33.541 END TEST nvmf_discovery_remove_ifc 00:23:33.541 ************************************ 00:23:33.541 00:08:03 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:33.541 00:08:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:33.541 00:08:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.541 00:08:03 -- common/autotest_common.sh@10 -- # set +x 00:23:33.801 ************************************ 00:23:33.802 START TEST nvmf_identify_kernel_target 00:23:33.802 ************************************ 00:23:33.802 00:08:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:33.802 * Looking for test storage... 
00:23:33.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.802 00:08:04 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.802 00:08:04 -- nvmf/common.sh@7 -- # uname -s 00:23:33.802 00:08:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.802 00:08:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.802 00:08:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.802 00:08:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.802 00:08:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.802 00:08:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.802 00:08:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.062 00:08:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.062 00:08:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.062 00:08:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.062 00:08:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.062 00:08:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.062 00:08:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.062 00:08:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.062 00:08:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.062 00:08:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.062 00:08:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.063 00:08:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.063 00:08:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.063 00:08:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.063 00:08:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.063 00:08:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.063 00:08:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.063 00:08:04 -- paths/export.sh@5 -- # export PATH 00:23:34.063 00:08:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.063 00:08:04 -- nvmf/common.sh@47 -- # : 0 00:23:34.063 00:08:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.063 00:08:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.063 00:08:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.063 00:08:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.063 00:08:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.063 00:08:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.063 00:08:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.063 00:08:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.063 00:08:04 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:34.063 00:08:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:34.063 00:08:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.063 00:08:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:34.063 00:08:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:34.063 00:08:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:34.063 00:08:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.063 00:08:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.063 00:08:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.063 00:08:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:34.063 00:08:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:34.063 00:08:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:34.063 00:08:04 -- common/autotest_common.sh@10 -- # set +x 00:23:42.210 00:08:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:42.210 00:08:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.210 00:08:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.210 00:08:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.210 00:08:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.210 00:08:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.210 00:08:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.210 00:08:10 -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.210 00:08:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:42.210 00:08:10 -- nvmf/common.sh@296 -- # e810=() 00:23:42.210 00:08:10 -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.210 00:08:10 -- nvmf/common.sh@297 -- # 
x722=() 00:23:42.210 00:08:10 -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.210 00:08:10 -- nvmf/common.sh@298 -- # mlx=() 00:23:42.210 00:08:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.210 00:08:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.210 00:08:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.210 00:08:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:42.210 00:08:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.210 00:08:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.210 00:08:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:42.210 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:42.210 00:08:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.210 00:08:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:42.210 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:42.210 00:08:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.210 00:08:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.210 00:08:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.210 00:08:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:42.210 00:08:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.210 00:08:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:42.210 Found net devices under 0000:31:00.0: cvl_0_0 00:23:42.210 00:08:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
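The nvmf_tcp_init trace below wires the two E810 ports (cvl_0_0 and cvl_0_1) back-to-back through a network namespace, so target and initiator run on the same host but still talk over real NICs. Condensed into a plain shell sketch (a paraphrase of the traced commands, not the script itself):

    ip netns add cvl_0_0_ns_spdk                         # namespace that will hold the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                   # root namespace -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator port

The two pings in the trace confirm the link in both directions before any NVMe traffic is attempted.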
00:23:42.210 00:08:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.210 00:08:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.210 00:08:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:42.210 00:08:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.210 00:08:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:42.210 Found net devices under 0000:31:00.1: cvl_0_1 00:23:42.210 00:08:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.210 00:08:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:42.210 00:08:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:42.210 00:08:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:42.210 00:08:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:42.210 00:08:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.210 00:08:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.210 00:08:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.210 00:08:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:42.210 00:08:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.210 00:08:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.210 00:08:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:42.210 00:08:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.210 00:08:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.210 00:08:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:42.210 00:08:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:42.210 00:08:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.211 00:08:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.211 00:08:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.211 00:08:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.211 00:08:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:42.211 00:08:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.211 00:08:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.211 00:08:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.211 00:08:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:42.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:23:42.211 00:23:42.211 --- 10.0.0.2 ping statistics --- 00:23:42.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.211 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:23:42.211 00:08:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:23:42.211 00:23:42.211 --- 10.0.0.1 ping statistics --- 00:23:42.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.211 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:23:42.211 00:08:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.211 00:08:11 -- nvmf/common.sh@411 -- # return 0 00:23:42.211 00:08:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:42.211 00:08:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.211 00:08:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:42.211 00:08:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:42.211 00:08:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.211 00:08:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:42.211 00:08:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:42.211 00:08:11 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:42.211 00:08:11 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:42.211 00:08:11 -- nvmf/common.sh@717 -- # local ip 00:23:42.211 00:08:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:42.211 00:08:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:42.211 00:08:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.211 00:08:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.211 00:08:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:42.211 00:08:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.211 00:08:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:42.211 00:08:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:42.211 00:08:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:42.211 00:08:11 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:42.211 00:08:11 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:42.211 00:08:11 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:42.211 00:08:11 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:42.211 00:08:11 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:42.211 00:08:11 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:42.211 00:08:11 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:42.211 00:08:11 -- nvmf/common.sh@628 -- # local block nvme 00:23:42.211 00:08:11 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:42.211 00:08:11 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:42.211 00:08:11 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:42.211 00:08:11 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:44.762 Waiting for block devices as requested 00:23:44.762 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:44.762 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:44.762 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:45.022 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:45.022 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:45.022 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:45.283 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:45.283 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:45.283 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:23:45.545 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:45.545 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:45.545 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:45.806 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:45.806 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:45.806 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:46.066 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:46.066 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:46.326 00:08:16 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:46.326 00:08:16 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:46.326 00:08:16 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:46.326 00:08:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:46.326 00:08:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:46.327 00:08:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:46.327 00:08:16 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:46.327 00:08:16 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:46.327 00:08:16 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:46.327 No valid GPT data, bailing 00:23:46.327 00:08:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:46.327 00:08:16 -- scripts/common.sh@391 -- # pt= 00:23:46.327 00:08:16 -- scripts/common.sh@392 -- # return 1 00:23:46.327 00:08:16 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:46.327 00:08:16 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:46.327 00:08:16 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:46.327 00:08:16 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:46.327 00:08:16 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:46.327 00:08:16 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:46.327 00:08:16 -- nvmf/common.sh@656 -- # echo 1 00:23:46.327 00:08:16 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:46.327 00:08:16 -- nvmf/common.sh@658 -- # echo 1 00:23:46.327 00:08:16 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:46.327 00:08:16 -- nvmf/common.sh@661 -- # echo tcp 00:23:46.327 00:08:16 -- nvmf/common.sh@662 -- # echo 4420 00:23:46.327 00:08:16 -- nvmf/common.sh@663 -- # echo ipv4 00:23:46.327 00:08:16 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:46.327 00:08:16 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:23:46.327 00:23:46.327 Discovery Log Number of Records 2, Generation counter 2 00:23:46.327 =====Discovery Log Entry 0====== 00:23:46.327 trtype: tcp 00:23:46.327 adrfam: ipv4 00:23:46.327 subtype: current discovery subsystem 00:23:46.327 treq: not specified, sq flow control disable supported 00:23:46.327 portid: 1 00:23:46.327 trsvcid: 4420 00:23:46.327 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:46.327 traddr: 10.0.0.1 00:23:46.327 eflags: none 00:23:46.327 sectype: none 00:23:46.327 =====Discovery Log Entry 1====== 00:23:46.327 trtype: tcp 00:23:46.327 adrfam: ipv4 00:23:46.327 subtype: nvme subsystem 00:23:46.327 treq: not specified, sq flow control disable supported 00:23:46.327 portid: 1 00:23:46.327 trsvcid: 4420 00:23:46.327 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:46.327 traddr: 10.0.0.1 00:23:46.327 eflags: none 00:23:46.327 sectype: none 00:23:46.327 00:08:16 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:46.327 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:46.589 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.589 ===================================================== 00:23:46.589 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:46.589 ===================================================== 00:23:46.589 Controller Capabilities/Features 00:23:46.589 ================================ 00:23:46.589 Vendor ID: 0000 00:23:46.589 Subsystem Vendor ID: 0000 00:23:46.589 Serial Number: d4108698826a2e51cef4 00:23:46.589 Model Number: Linux 00:23:46.589 Firmware Version: 6.7.0-68 00:23:46.589 Recommended Arb Burst: 0 00:23:46.589 IEEE OUI Identifier: 00 00 00 00:23:46.589 Multi-path I/O 00:23:46.589 May have multiple subsystem ports: No 00:23:46.589 May have multiple controllers: No 00:23:46.589 Associated with SR-IOV VF: No 00:23:46.589 Max Data Transfer Size: Unlimited 00:23:46.589 Max Number of Namespaces: 0 00:23:46.589 Max Number of I/O Queues: 1024 00:23:46.589 NVMe Specification Version (VS): 1.3 00:23:46.589 NVMe Specification Version (Identify): 1.3 00:23:46.589 Maximum Queue Entries: 1024 00:23:46.589 Contiguous Queues Required: No 00:23:46.589 Arbitration Mechanisms Supported 00:23:46.589 Weighted Round Robin: Not Supported 00:23:46.589 Vendor Specific: Not Supported 00:23:46.589 Reset Timeout: 7500 ms 00:23:46.589 Doorbell Stride: 4 bytes 00:23:46.589 NVM Subsystem Reset: Not Supported 00:23:46.589 Command Sets Supported 00:23:46.589 NVM Command Set: Supported 00:23:46.589 Boot Partition: Not Supported 00:23:46.589 Memory Page Size Minimum: 4096 bytes 00:23:46.589 Memory Page Size Maximum: 4096 bytes 00:23:46.589 Persistent Memory Region: Not Supported 00:23:46.589 Optional Asynchronous Events Supported 00:23:46.589 Namespace Attribute Notices: Not Supported 00:23:46.589 Firmware Activation Notices: Not Supported 00:23:46.589 ANA Change Notices: Not Supported 00:23:46.589 PLE Aggregate Log Change Notices: Not Supported 00:23:46.589 LBA Status Info Alert Notices: Not Supported 00:23:46.589 EGE Aggregate Log Change Notices: Not Supported 00:23:46.589 Normal NVM Subsystem Shutdown event: Not Supported 00:23:46.589 Zone Descriptor Change Notices: Not Supported 00:23:46.589 Discovery Log Change Notices: Supported 
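The configure_kernel_target trace above builds a kernel (nvmet) soft target through configfs, but xtrace does not print output redirections, so the bare "echo" lines do not show which attribute each value lands in. Under the standard Linux nvmet configfs layout the sequence corresponds roughly to the sketch below; the attribute paths are inferred from the kernel ABI rather than visible in the log (the SPDK-nqn... string does show up later as the controller's reported Model Number):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"    # reported as Model Number (inferred target)
    echo 1 > "$subsys/attr_allow_any_host"                          # accept any host NQN
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # back namespace 1 with the local NVMe disk
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"                             # listen address used by nvme discover above
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                             # expose the subsystem on the port

Once the symlink is in place, the "nvme discover" run above returns two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.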
00:23:46.589 Controller Attributes 00:23:46.589 128-bit Host Identifier: Not Supported 00:23:46.589 Non-Operational Permissive Mode: Not Supported 00:23:46.589 NVM Sets: Not Supported 00:23:46.589 Read Recovery Levels: Not Supported 00:23:46.589 Endurance Groups: Not Supported 00:23:46.589 Predictable Latency Mode: Not Supported 00:23:46.589 Traffic Based Keep ALive: Not Supported 00:23:46.589 Namespace Granularity: Not Supported 00:23:46.589 SQ Associations: Not Supported 00:23:46.589 UUID List: Not Supported 00:23:46.589 Multi-Domain Subsystem: Not Supported 00:23:46.589 Fixed Capacity Management: Not Supported 00:23:46.589 Variable Capacity Management: Not Supported 00:23:46.589 Delete Endurance Group: Not Supported 00:23:46.589 Delete NVM Set: Not Supported 00:23:46.589 Extended LBA Formats Supported: Not Supported 00:23:46.589 Flexible Data Placement Supported: Not Supported 00:23:46.589 00:23:46.589 Controller Memory Buffer Support 00:23:46.589 ================================ 00:23:46.589 Supported: No 00:23:46.589 00:23:46.589 Persistent Memory Region Support 00:23:46.589 ================================ 00:23:46.589 Supported: No 00:23:46.589 00:23:46.589 Admin Command Set Attributes 00:23:46.589 ============================ 00:23:46.589 Security Send/Receive: Not Supported 00:23:46.589 Format NVM: Not Supported 00:23:46.589 Firmware Activate/Download: Not Supported 00:23:46.589 Namespace Management: Not Supported 00:23:46.589 Device Self-Test: Not Supported 00:23:46.589 Directives: Not Supported 00:23:46.589 NVMe-MI: Not Supported 00:23:46.589 Virtualization Management: Not Supported 00:23:46.589 Doorbell Buffer Config: Not Supported 00:23:46.589 Get LBA Status Capability: Not Supported 00:23:46.589 Command & Feature Lockdown Capability: Not Supported 00:23:46.589 Abort Command Limit: 1 00:23:46.589 Async Event Request Limit: 1 00:23:46.589 Number of Firmware Slots: N/A 00:23:46.589 Firmware Slot 1 Read-Only: N/A 00:23:46.589 Firmware Activation Without Reset: N/A 00:23:46.589 Multiple Update Detection Support: N/A 00:23:46.589 Firmware Update Granularity: No Information Provided 00:23:46.589 Per-Namespace SMART Log: No 00:23:46.589 Asymmetric Namespace Access Log Page: Not Supported 00:23:46.589 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:46.589 Command Effects Log Page: Not Supported 00:23:46.589 Get Log Page Extended Data: Supported 00:23:46.589 Telemetry Log Pages: Not Supported 00:23:46.589 Persistent Event Log Pages: Not Supported 00:23:46.589 Supported Log Pages Log Page: May Support 00:23:46.589 Commands Supported & Effects Log Page: Not Supported 00:23:46.589 Feature Identifiers & Effects Log Page:May Support 00:23:46.589 NVMe-MI Commands & Effects Log Page: May Support 00:23:46.589 Data Area 4 for Telemetry Log: Not Supported 00:23:46.589 Error Log Page Entries Supported: 1 00:23:46.589 Keep Alive: Not Supported 00:23:46.589 00:23:46.589 NVM Command Set Attributes 00:23:46.589 ========================== 00:23:46.589 Submission Queue Entry Size 00:23:46.589 Max: 1 00:23:46.589 Min: 1 00:23:46.589 Completion Queue Entry Size 00:23:46.589 Max: 1 00:23:46.589 Min: 1 00:23:46.589 Number of Namespaces: 0 00:23:46.589 Compare Command: Not Supported 00:23:46.589 Write Uncorrectable Command: Not Supported 00:23:46.589 Dataset Management Command: Not Supported 00:23:46.589 Write Zeroes Command: Not Supported 00:23:46.589 Set Features Save Field: Not Supported 00:23:46.589 Reservations: Not Supported 00:23:46.589 Timestamp: Not Supported 00:23:46.589 Copy: Not 
Supported 00:23:46.589 Volatile Write Cache: Not Present 00:23:46.589 Atomic Write Unit (Normal): 1 00:23:46.589 Atomic Write Unit (PFail): 1 00:23:46.589 Atomic Compare & Write Unit: 1 00:23:46.589 Fused Compare & Write: Not Supported 00:23:46.589 Scatter-Gather List 00:23:46.589 SGL Command Set: Supported 00:23:46.589 SGL Keyed: Not Supported 00:23:46.589 SGL Bit Bucket Descriptor: Not Supported 00:23:46.589 SGL Metadata Pointer: Not Supported 00:23:46.589 Oversized SGL: Not Supported 00:23:46.589 SGL Metadata Address: Not Supported 00:23:46.589 SGL Offset: Supported 00:23:46.589 Transport SGL Data Block: Not Supported 00:23:46.589 Replay Protected Memory Block: Not Supported 00:23:46.589 00:23:46.589 Firmware Slot Information 00:23:46.589 ========================= 00:23:46.589 Active slot: 0 00:23:46.589 00:23:46.589 00:23:46.589 Error Log 00:23:46.589 ========= 00:23:46.589 00:23:46.589 Active Namespaces 00:23:46.589 ================= 00:23:46.589 Discovery Log Page 00:23:46.589 ================== 00:23:46.589 Generation Counter: 2 00:23:46.589 Number of Records: 2 00:23:46.589 Record Format: 0 00:23:46.589 00:23:46.589 Discovery Log Entry 0 00:23:46.589 ---------------------- 00:23:46.589 Transport Type: 3 (TCP) 00:23:46.589 Address Family: 1 (IPv4) 00:23:46.589 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:46.589 Entry Flags: 00:23:46.589 Duplicate Returned Information: 0 00:23:46.589 Explicit Persistent Connection Support for Discovery: 0 00:23:46.589 Transport Requirements: 00:23:46.589 Secure Channel: Not Specified 00:23:46.589 Port ID: 1 (0x0001) 00:23:46.589 Controller ID: 65535 (0xffff) 00:23:46.589 Admin Max SQ Size: 32 00:23:46.589 Transport Service Identifier: 4420 00:23:46.589 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:46.589 Transport Address: 10.0.0.1 00:23:46.589 Discovery Log Entry 1 00:23:46.589 ---------------------- 00:23:46.589 Transport Type: 3 (TCP) 00:23:46.589 Address Family: 1 (IPv4) 00:23:46.589 Subsystem Type: 2 (NVM Subsystem) 00:23:46.589 Entry Flags: 00:23:46.589 Duplicate Returned Information: 0 00:23:46.589 Explicit Persistent Connection Support for Discovery: 0 00:23:46.589 Transport Requirements: 00:23:46.589 Secure Channel: Not Specified 00:23:46.589 Port ID: 1 (0x0001) 00:23:46.590 Controller ID: 65535 (0xffff) 00:23:46.590 Admin Max SQ Size: 32 00:23:46.590 Transport Service Identifier: 4420 00:23:46.590 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:46.590 Transport Address: 10.0.0.1 00:23:46.590 00:08:16 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:46.590 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.590 get_feature(0x01) failed 00:23:46.590 get_feature(0x02) failed 00:23:46.590 get_feature(0x04) failed 00:23:46.590 ===================================================== 00:23:46.590 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:46.590 ===================================================== 00:23:46.590 Controller Capabilities/Features 00:23:46.590 ================================ 00:23:46.590 Vendor ID: 0000 00:23:46.590 Subsystem Vendor ID: 0000 00:23:46.590 Serial Number: 00e1135a20962af15847 00:23:46.590 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:46.590 Firmware Version: 6.7.0-68 00:23:46.590 Recommended Arb Burst: 6 00:23:46.590 IEEE OUI Identifier: 00 00 00 
00:23:46.590 Multi-path I/O 00:23:46.590 May have multiple subsystem ports: Yes 00:23:46.590 May have multiple controllers: Yes 00:23:46.590 Associated with SR-IOV VF: No 00:23:46.590 Max Data Transfer Size: Unlimited 00:23:46.590 Max Number of Namespaces: 1024 00:23:46.590 Max Number of I/O Queues: 128 00:23:46.590 NVMe Specification Version (VS): 1.3 00:23:46.590 NVMe Specification Version (Identify): 1.3 00:23:46.590 Maximum Queue Entries: 1024 00:23:46.590 Contiguous Queues Required: No 00:23:46.590 Arbitration Mechanisms Supported 00:23:46.590 Weighted Round Robin: Not Supported 00:23:46.590 Vendor Specific: Not Supported 00:23:46.590 Reset Timeout: 7500 ms 00:23:46.590 Doorbell Stride: 4 bytes 00:23:46.590 NVM Subsystem Reset: Not Supported 00:23:46.590 Command Sets Supported 00:23:46.590 NVM Command Set: Supported 00:23:46.590 Boot Partition: Not Supported 00:23:46.590 Memory Page Size Minimum: 4096 bytes 00:23:46.590 Memory Page Size Maximum: 4096 bytes 00:23:46.590 Persistent Memory Region: Not Supported 00:23:46.590 Optional Asynchronous Events Supported 00:23:46.590 Namespace Attribute Notices: Supported 00:23:46.590 Firmware Activation Notices: Not Supported 00:23:46.590 ANA Change Notices: Supported 00:23:46.590 PLE Aggregate Log Change Notices: Not Supported 00:23:46.590 LBA Status Info Alert Notices: Not Supported 00:23:46.590 EGE Aggregate Log Change Notices: Not Supported 00:23:46.590 Normal NVM Subsystem Shutdown event: Not Supported 00:23:46.590 Zone Descriptor Change Notices: Not Supported 00:23:46.590 Discovery Log Change Notices: Not Supported 00:23:46.590 Controller Attributes 00:23:46.590 128-bit Host Identifier: Supported 00:23:46.590 Non-Operational Permissive Mode: Not Supported 00:23:46.590 NVM Sets: Not Supported 00:23:46.590 Read Recovery Levels: Not Supported 00:23:46.590 Endurance Groups: Not Supported 00:23:46.590 Predictable Latency Mode: Not Supported 00:23:46.590 Traffic Based Keep ALive: Supported 00:23:46.590 Namespace Granularity: Not Supported 00:23:46.590 SQ Associations: Not Supported 00:23:46.590 UUID List: Not Supported 00:23:46.590 Multi-Domain Subsystem: Not Supported 00:23:46.590 Fixed Capacity Management: Not Supported 00:23:46.590 Variable Capacity Management: Not Supported 00:23:46.590 Delete Endurance Group: Not Supported 00:23:46.590 Delete NVM Set: Not Supported 00:23:46.590 Extended LBA Formats Supported: Not Supported 00:23:46.590 Flexible Data Placement Supported: Not Supported 00:23:46.590 00:23:46.590 Controller Memory Buffer Support 00:23:46.590 ================================ 00:23:46.590 Supported: No 00:23:46.590 00:23:46.590 Persistent Memory Region Support 00:23:46.590 ================================ 00:23:46.590 Supported: No 00:23:46.590 00:23:46.590 Admin Command Set Attributes 00:23:46.590 ============================ 00:23:46.590 Security Send/Receive: Not Supported 00:23:46.590 Format NVM: Not Supported 00:23:46.590 Firmware Activate/Download: Not Supported 00:23:46.590 Namespace Management: Not Supported 00:23:46.590 Device Self-Test: Not Supported 00:23:46.590 Directives: Not Supported 00:23:46.590 NVMe-MI: Not Supported 00:23:46.590 Virtualization Management: Not Supported 00:23:46.590 Doorbell Buffer Config: Not Supported 00:23:46.590 Get LBA Status Capability: Not Supported 00:23:46.590 Command & Feature Lockdown Capability: Not Supported 00:23:46.590 Abort Command Limit: 4 00:23:46.590 Async Event Request Limit: 4 00:23:46.590 Number of Firmware Slots: N/A 00:23:46.590 Firmware Slot 1 Read-Only: N/A 00:23:46.590 
Firmware Activation Without Reset: N/A 00:23:46.590 Multiple Update Detection Support: N/A 00:23:46.590 Firmware Update Granularity: No Information Provided 00:23:46.590 Per-Namespace SMART Log: Yes 00:23:46.590 Asymmetric Namespace Access Log Page: Supported 00:23:46.590 ANA Transition Time : 10 sec 00:23:46.590 00:23:46.590 Asymmetric Namespace Access Capabilities 00:23:46.590 ANA Optimized State : Supported 00:23:46.590 ANA Non-Optimized State : Supported 00:23:46.590 ANA Inaccessible State : Supported 00:23:46.590 ANA Persistent Loss State : Supported 00:23:46.590 ANA Change State : Supported 00:23:46.590 ANAGRPID is not changed : No 00:23:46.590 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:46.590 00:23:46.590 ANA Group Identifier Maximum : 128 00:23:46.590 Number of ANA Group Identifiers : 128 00:23:46.590 Max Number of Allowed Namespaces : 1024 00:23:46.590 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:46.590 Command Effects Log Page: Supported 00:23:46.590 Get Log Page Extended Data: Supported 00:23:46.590 Telemetry Log Pages: Not Supported 00:23:46.590 Persistent Event Log Pages: Not Supported 00:23:46.590 Supported Log Pages Log Page: May Support 00:23:46.590 Commands Supported & Effects Log Page: Not Supported 00:23:46.590 Feature Identifiers & Effects Log Page:May Support 00:23:46.590 NVMe-MI Commands & Effects Log Page: May Support 00:23:46.590 Data Area 4 for Telemetry Log: Not Supported 00:23:46.590 Error Log Page Entries Supported: 128 00:23:46.590 Keep Alive: Supported 00:23:46.590 Keep Alive Granularity: 1000 ms 00:23:46.590 00:23:46.590 NVM Command Set Attributes 00:23:46.590 ========================== 00:23:46.590 Submission Queue Entry Size 00:23:46.590 Max: 64 00:23:46.590 Min: 64 00:23:46.590 Completion Queue Entry Size 00:23:46.590 Max: 16 00:23:46.590 Min: 16 00:23:46.590 Number of Namespaces: 1024 00:23:46.590 Compare Command: Not Supported 00:23:46.590 Write Uncorrectable Command: Not Supported 00:23:46.590 Dataset Management Command: Supported 00:23:46.590 Write Zeroes Command: Supported 00:23:46.590 Set Features Save Field: Not Supported 00:23:46.590 Reservations: Not Supported 00:23:46.590 Timestamp: Not Supported 00:23:46.590 Copy: Not Supported 00:23:46.590 Volatile Write Cache: Present 00:23:46.590 Atomic Write Unit (Normal): 1 00:23:46.590 Atomic Write Unit (PFail): 1 00:23:46.590 Atomic Compare & Write Unit: 1 00:23:46.590 Fused Compare & Write: Not Supported 00:23:46.590 Scatter-Gather List 00:23:46.590 SGL Command Set: Supported 00:23:46.590 SGL Keyed: Not Supported 00:23:46.590 SGL Bit Bucket Descriptor: Not Supported 00:23:46.590 SGL Metadata Pointer: Not Supported 00:23:46.590 Oversized SGL: Not Supported 00:23:46.590 SGL Metadata Address: Not Supported 00:23:46.590 SGL Offset: Supported 00:23:46.590 Transport SGL Data Block: Not Supported 00:23:46.590 Replay Protected Memory Block: Not Supported 00:23:46.590 00:23:46.590 Firmware Slot Information 00:23:46.590 ========================= 00:23:46.590 Active slot: 0 00:23:46.590 00:23:46.590 Asymmetric Namespace Access 00:23:46.590 =========================== 00:23:46.590 Change Count : 0 00:23:46.590 Number of ANA Group Descriptors : 1 00:23:46.590 ANA Group Descriptor : 0 00:23:46.590 ANA Group ID : 1 00:23:46.590 Number of NSID Values : 1 00:23:46.590 Change Count : 0 00:23:46.590 ANA State : 1 00:23:46.590 Namespace Identifier : 1 00:23:46.590 00:23:46.590 Commands Supported and Effects 00:23:46.590 ============================== 00:23:46.590 Admin Commands 00:23:46.590 -------------- 
00:23:46.590 Get Log Page (02h): Supported 00:23:46.590 Identify (06h): Supported 00:23:46.590 Abort (08h): Supported 00:23:46.590 Set Features (09h): Supported 00:23:46.590 Get Features (0Ah): Supported 00:23:46.591 Asynchronous Event Request (0Ch): Supported 00:23:46.591 Keep Alive (18h): Supported 00:23:46.591 I/O Commands 00:23:46.591 ------------ 00:23:46.591 Flush (00h): Supported 00:23:46.591 Write (01h): Supported LBA-Change 00:23:46.591 Read (02h): Supported 00:23:46.591 Write Zeroes (08h): Supported LBA-Change 00:23:46.591 Dataset Management (09h): Supported 00:23:46.591 00:23:46.591 Error Log 00:23:46.591 ========= 00:23:46.591 Entry: 0 00:23:46.591 Error Count: 0x3 00:23:46.591 Submission Queue Id: 0x0 00:23:46.591 Command Id: 0x5 00:23:46.591 Phase Bit: 0 00:23:46.591 Status Code: 0x2 00:23:46.591 Status Code Type: 0x0 00:23:46.591 Do Not Retry: 1 00:23:46.591 Error Location: 0x28 00:23:46.591 LBA: 0x0 00:23:46.591 Namespace: 0x0 00:23:46.591 Vendor Log Page: 0x0 00:23:46.591 ----------- 00:23:46.591 Entry: 1 00:23:46.591 Error Count: 0x2 00:23:46.591 Submission Queue Id: 0x0 00:23:46.591 Command Id: 0x5 00:23:46.591 Phase Bit: 0 00:23:46.591 Status Code: 0x2 00:23:46.591 Status Code Type: 0x0 00:23:46.591 Do Not Retry: 1 00:23:46.591 Error Location: 0x28 00:23:46.591 LBA: 0x0 00:23:46.591 Namespace: 0x0 00:23:46.591 Vendor Log Page: 0x0 00:23:46.591 ----------- 00:23:46.591 Entry: 2 00:23:46.591 Error Count: 0x1 00:23:46.591 Submission Queue Id: 0x0 00:23:46.591 Command Id: 0x4 00:23:46.591 Phase Bit: 0 00:23:46.591 Status Code: 0x2 00:23:46.591 Status Code Type: 0x0 00:23:46.591 Do Not Retry: 1 00:23:46.591 Error Location: 0x28 00:23:46.591 LBA: 0x0 00:23:46.591 Namespace: 0x0 00:23:46.591 Vendor Log Page: 0x0 00:23:46.591 00:23:46.591 Number of Queues 00:23:46.591 ================ 00:23:46.591 Number of I/O Submission Queues: 128 00:23:46.591 Number of I/O Completion Queues: 128 00:23:46.591 00:23:46.591 ZNS Specific Controller Data 00:23:46.591 ============================ 00:23:46.591 Zone Append Size Limit: 0 00:23:46.591 00:23:46.591 00:23:46.591 Active Namespaces 00:23:46.591 ================= 00:23:46.591 get_feature(0x05) failed 00:23:46.591 Namespace ID:1 00:23:46.591 Command Set Identifier: NVM (00h) 00:23:46.591 Deallocate: Supported 00:23:46.591 Deallocated/Unwritten Error: Not Supported 00:23:46.591 Deallocated Read Value: Unknown 00:23:46.591 Deallocate in Write Zeroes: Not Supported 00:23:46.591 Deallocated Guard Field: 0xFFFF 00:23:46.591 Flush: Supported 00:23:46.591 Reservation: Not Supported 00:23:46.591 Namespace Sharing Capabilities: Multiple Controllers 00:23:46.591 Size (in LBAs): 3750748848 (1788GiB) 00:23:46.591 Capacity (in LBAs): 3750748848 (1788GiB) 00:23:46.591 Utilization (in LBAs): 3750748848 (1788GiB) 00:23:46.591 UUID: dae020b8-40c7-4d1e-85ab-6773f50e1264 00:23:46.591 Thin Provisioning: Not Supported 00:23:46.591 Per-NS Atomic Units: Yes 00:23:46.591 Atomic Write Unit (Normal): 8 00:23:46.591 Atomic Write Unit (PFail): 8 00:23:46.591 Preferred Write Granularity: 8 00:23:46.591 Atomic Compare & Write Unit: 8 00:23:46.591 Atomic Boundary Size (Normal): 0 00:23:46.591 Atomic Boundary Size (PFail): 0 00:23:46.591 Atomic Boundary Offset: 0 00:23:46.591 NGUID/EUI64 Never Reused: No 00:23:46.591 ANA group ID: 1 00:23:46.591 Namespace Write Protected: No 00:23:46.591 Number of LBA Formats: 1 00:23:46.591 Current LBA Format: LBA Format #00 00:23:46.591 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:46.591 00:23:46.591 00:08:16 -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:46.591 00:08:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:46.591 00:08:16 -- nvmf/common.sh@117 -- # sync 00:23:46.591 00:08:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.591 00:08:16 -- nvmf/common.sh@120 -- # set +e 00:23:46.591 00:08:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.591 00:08:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.591 rmmod nvme_tcp 00:23:46.591 rmmod nvme_fabrics 00:23:46.591 00:08:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.591 00:08:16 -- nvmf/common.sh@124 -- # set -e 00:23:46.591 00:08:16 -- nvmf/common.sh@125 -- # return 0 00:23:46.591 00:08:16 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:46.591 00:08:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:46.591 00:08:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:46.591 00:08:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:46.591 00:08:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.591 00:08:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.591 00:08:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.591 00:08:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.591 00:08:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.185 00:08:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.185 00:08:18 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:49.185 00:08:18 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:49.185 00:08:18 -- nvmf/common.sh@675 -- # echo 0 00:23:49.185 00:08:18 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:49.185 00:08:18 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:49.185 00:08:18 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:49.185 00:08:18 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:49.185 00:08:18 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:49.185 00:08:18 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:49.185 00:08:18 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:51.723 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:51.723 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:51.723 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:51.723 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:51.723 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:51.723 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:51.723 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:51.984 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:23:52.244 00:23:52.244 real 0m18.514s 00:23:52.244 user 0m5.040s 00:23:52.244 sys 0m10.429s 00:23:52.244 
00:08:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:52.244 00:08:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.244 ************************************ 00:23:52.244 END TEST nvmf_identify_kernel_target 00:23:52.244 ************************************ 00:23:52.244 00:08:22 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.244 00:08:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:52.244 00:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:52.244 00:08:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.505 ************************************ 00:23:52.505 START TEST nvmf_auth 00:23:52.505 ************************************ 00:23:52.505 00:08:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.505 * Looking for test storage... 00:23:52.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.505 00:08:22 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.505 00:08:22 -- nvmf/common.sh@7 -- # uname -s 00:23:52.505 00:08:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.505 00:08:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.505 00:08:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.505 00:08:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.505 00:08:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.505 00:08:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.505 00:08:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.505 00:08:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.505 00:08:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.505 00:08:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.505 00:08:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:52.505 00:08:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:52.505 00:08:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.505 00:08:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.505 00:08:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.505 00:08:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.506 00:08:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.506 00:08:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.506 00:08:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.506 00:08:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.506 00:08:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.506 00:08:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.506 00:08:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.506 00:08:22 -- paths/export.sh@5 -- # export PATH 00:23:52.506 00:08:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.506 00:08:22 -- nvmf/common.sh@47 -- # : 0 00:23:52.506 00:08:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.506 00:08:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.506 00:08:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.506 00:08:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.506 00:08:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.506 00:08:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.506 00:08:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.506 00:08:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.767 00:08:22 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:52.767 00:08:22 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:52.767 00:08:22 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:52.767 00:08:22 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:52.767 00:08:22 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:52.767 00:08:22 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:52.767 00:08:22 -- host/auth.sh@21 -- # keys=() 00:23:52.767 00:08:22 -- host/auth.sh@77 -- # nvmftestinit 00:23:52.767 00:08:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:52.767 00:08:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.767 00:08:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:52.767 00:08:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:52.767 00:08:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:52.767 00:08:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.767 00:08:22 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.767 00:08:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.767 00:08:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:52.767 00:08:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:52.767 00:08:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.767 00:08:22 -- common/autotest_common.sh@10 -- # set +x 00:24:00.904 00:08:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:00.904 00:08:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.904 00:08:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.904 00:08:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.904 00:08:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.904 00:08:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.904 00:08:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.904 00:08:29 -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.904 00:08:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.904 00:08:29 -- nvmf/common.sh@296 -- # e810=() 00:24:00.904 00:08:29 -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.904 00:08:29 -- nvmf/common.sh@297 -- # x722=() 00:24:00.904 00:08:29 -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.904 00:08:29 -- nvmf/common.sh@298 -- # mlx=() 00:24:00.904 00:08:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.904 00:08:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.904 00:08:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.904 00:08:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.904 00:08:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.904 00:08:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.904 00:08:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:00.904 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:00.904 00:08:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.904 00:08:29 -- nvmf/common.sh@341 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:24:00.904 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:00.904 00:08:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.904 00:08:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.904 00:08:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.904 00:08:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.904 00:08:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:00.904 00:08:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.904 00:08:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:00.904 Found net devices under 0000:31:00.0: cvl_0_0 00:24:00.904 00:08:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.904 00:08:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.904 00:08:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.904 00:08:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:00.904 00:08:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.905 00:08:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:00.905 Found net devices under 0000:31:00.1: cvl_0_1 00:24:00.905 00:08:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.905 00:08:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:00.905 00:08:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:00.905 00:08:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:00.905 00:08:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:00.905 00:08:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:00.905 00:08:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.905 00:08:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.905 00:08:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.905 00:08:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.905 00:08:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.905 00:08:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.905 00:08:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.905 00:08:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.905 00:08:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.905 00:08:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.905 00:08:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.905 00:08:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.905 00:08:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.905 00:08:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.905 00:08:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.905 00:08:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.905 00:08:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.905 00:08:30 -- 
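The device scan above resolves each matching PCI function to its net devices purely through sysfs. A small stand-alone sketch of that lookup (the 0x8086/0x159b IDs and the cvl_0_* names come from this log; the script itself is illustrative, not part of the harness):

#!/usr/bin/env bash
# Illustrative version of the PCI-to-netdev mapping gather_supported_nvmf_pci_devs
# performs above: find E810 functions, then list the net devices exposed under them.
intel=0x8086
e810=0x159b    # device ID reported as "Found 0000:31:00.x (0x8086 - 0x159b)" above

for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
    [[ -d $pci/net ]] || continue                 # skip functions with no net interface bound
    for netdev in "$pci"/net/*; do
        echo "Found net devices under ${pci##*/}: ${netdev##*/}"
    done
done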
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.905 00:08:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.905 00:08:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:24:00.905 00:24:00.905 --- 10.0.0.2 ping statistics --- 00:24:00.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.905 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:24:00.905 00:08:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:24:00.905 00:24:00.905 --- 10.0.0.1 ping statistics --- 00:24:00.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.905 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:24:00.905 00:08:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.905 00:08:30 -- nvmf/common.sh@411 -- # return 0 00:24:00.905 00:08:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:00.905 00:08:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.905 00:08:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:00.905 00:08:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:00.905 00:08:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.905 00:08:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:00.905 00:08:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:00.905 00:08:30 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:24:00.905 00:08:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:00.905 00:08:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:00.905 00:08:30 -- common/autotest_common.sh@10 -- # set +x 00:24:00.905 00:08:30 -- nvmf/common.sh@470 -- # nvmfpid=518816 00:24:00.905 00:08:30 -- nvmf/common.sh@471 -- # waitforlisten 518816 00:24:00.905 00:08:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:00.905 00:08:30 -- common/autotest_common.sh@817 -- # '[' -z 518816 ']' 00:24:00.905 00:08:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.905 00:08:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:00.905 00:08:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
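For readability, here is the interface plumbing the trace just performed, collected into one hedged recap (interface names, addresses and the namespace name are taken verbatim from the log; the addr-flush steps are omitted):

#!/usr/bin/env bash
# Condensed recap of the nvmf_tcp_init steps visible above: one E810 port
# (cvl_0_0) moves into its own network namespace, its back-to-back peer
# (cvl_0_1) stays in the default namespace, and TCP/4420 is opened between them.
set -e
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                        # default-namespace side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # namespaced side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP (port 4420) in through the default-namespace interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Same sanity pings the harness runs: each side must reach the other.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1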
00:24:00.905 00:08:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:00.905 00:08:30 -- common/autotest_common.sh@10 -- # set +x 00:24:00.905 00:08:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:00.905 00:08:30 -- common/autotest_common.sh@850 -- # return 0 00:24:00.905 00:08:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:00.905 00:08:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:00.905 00:08:30 -- common/autotest_common.sh@10 -- # set +x 00:24:00.905 00:08:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.905 00:08:30 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:00.905 00:08:30 -- host/auth.sh@81 -- # gen_key null 32 00:24:00.905 00:08:30 -- host/auth.sh@53 -- # local digest len file key 00:24:00.905 00:08:30 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.905 00:08:30 -- host/auth.sh@54 -- # local -A digests 00:24:00.905 00:08:30 -- host/auth.sh@56 -- # digest=null 00:24:00.905 00:08:30 -- host/auth.sh@56 -- # len=32 00:24:00.905 00:08:30 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:00.905 00:08:30 -- host/auth.sh@57 -- # key=4f856b7e56d75636b40d31a8ac525abe 00:24:00.905 00:08:30 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:00.905 00:08:30 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.eEH 00:24:00.905 00:08:30 -- host/auth.sh@59 -- # format_dhchap_key 4f856b7e56d75636b40d31a8ac525abe 0 00:24:00.905 00:08:30 -- nvmf/common.sh@708 -- # format_key DHHC-1 4f856b7e56d75636b40d31a8ac525abe 0 00:24:00.905 00:08:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:00.905 00:08:30 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:00.905 00:08:30 -- nvmf/common.sh@693 -- # key=4f856b7e56d75636b40d31a8ac525abe 00:24:00.905 00:08:30 -- nvmf/common.sh@693 -- # digest=0 00:24:00.905 00:08:30 -- nvmf/common.sh@694 -- # python - 00:24:00.905 00:08:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.eEH 00:24:00.905 00:08:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.eEH 00:24:00.905 00:08:31 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.eEH 00:24:00.905 00:08:31 -- host/auth.sh@82 -- # gen_key null 48 00:24:00.905 00:08:31 -- host/auth.sh@53 -- # local digest len file key 00:24:00.905 00:08:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.905 00:08:31 -- host/auth.sh@54 -- # local -A digests 00:24:00.905 00:08:31 -- host/auth.sh@56 -- # digest=null 00:24:00.905 00:08:31 -- host/auth.sh@56 -- # len=48 00:24:00.905 00:08:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:00.905 00:08:31 -- host/auth.sh@57 -- # key=49e5ece08d1138b53727bf4f5c60db9344eeb95e1585d03a 00:24:00.905 00:08:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:00.905 00:08:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.NdY 00:24:00.905 00:08:31 -- host/auth.sh@59 -- # format_dhchap_key 49e5ece08d1138b53727bf4f5c60db9344eeb95e1585d03a 0 00:24:00.905 00:08:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 49e5ece08d1138b53727bf4f5c60db9344eeb95e1585d03a 0 00:24:00.905 00:08:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:00.905 00:08:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:00.905 00:08:31 -- nvmf/common.sh@693 -- # key=49e5ece08d1138b53727bf4f5c60db9344eeb95e1585d03a 00:24:00.905 00:08:31 -- nvmf/common.sh@693 -- # 
digest=0 00:24:00.905 00:08:31 -- nvmf/common.sh@694 -- # python - 00:24:00.905 00:08:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.NdY 00:24:00.905 00:08:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.NdY 00:24:00.905 00:08:31 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.NdY 00:24:00.905 00:08:31 -- host/auth.sh@83 -- # gen_key sha256 32 00:24:00.905 00:08:31 -- host/auth.sh@53 -- # local digest len file key 00:24:00.905 00:08:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.905 00:08:31 -- host/auth.sh@54 -- # local -A digests 00:24:00.905 00:08:31 -- host/auth.sh@56 -- # digest=sha256 00:24:00.905 00:08:31 -- host/auth.sh@56 -- # len=32 00:24:00.905 00:08:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:00.905 00:08:31 -- host/auth.sh@57 -- # key=9a077df45b4d7959cf6fee78cf3d6718 00:24:00.905 00:08:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:24:00.905 00:08:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.XEF 00:24:00.905 00:08:31 -- host/auth.sh@59 -- # format_dhchap_key 9a077df45b4d7959cf6fee78cf3d6718 1 00:24:00.905 00:08:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 9a077df45b4d7959cf6fee78cf3d6718 1 00:24:00.905 00:08:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:00.905 00:08:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:00.905 00:08:31 -- nvmf/common.sh@693 -- # key=9a077df45b4d7959cf6fee78cf3d6718 00:24:00.905 00:08:31 -- nvmf/common.sh@693 -- # digest=1 00:24:00.905 00:08:31 -- nvmf/common.sh@694 -- # python - 00:24:01.166 00:08:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.XEF 00:24:01.166 00:08:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.XEF 00:24:01.166 00:08:31 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.XEF 00:24:01.166 00:08:31 -- host/auth.sh@84 -- # gen_key sha384 48 00:24:01.166 00:08:31 -- host/auth.sh@53 -- # local digest len file key 00:24:01.166 00:08:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.166 00:08:31 -- host/auth.sh@54 -- # local -A digests 00:24:01.166 00:08:31 -- host/auth.sh@56 -- # digest=sha384 00:24:01.166 00:08:31 -- host/auth.sh@56 -- # len=48 00:24:01.166 00:08:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:01.166 00:08:31 -- host/auth.sh@57 -- # key=40a3d1141d7bf5a442902b75f625bcf9c788dfebb2e00c67 00:24:01.166 00:08:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:24:01.166 00:08:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.wNu 00:24:01.166 00:08:31 -- host/auth.sh@59 -- # format_dhchap_key 40a3d1141d7bf5a442902b75f625bcf9c788dfebb2e00c67 2 00:24:01.166 00:08:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 40a3d1141d7bf5a442902b75f625bcf9c788dfebb2e00c67 2 00:24:01.166 00:08:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:01.166 00:08:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:01.166 00:08:31 -- nvmf/common.sh@693 -- # key=40a3d1141d7bf5a442902b75f625bcf9c788dfebb2e00c67 00:24:01.166 00:08:31 -- nvmf/common.sh@693 -- # digest=2 00:24:01.166 00:08:31 -- nvmf/common.sh@694 -- # python - 00:24:01.166 00:08:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.wNu 00:24:01.166 00:08:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.wNu 00:24:01.166 00:08:31 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.wNu 00:24:01.166 00:08:31 -- host/auth.sh@85 -- # gen_key sha512 64 00:24:01.166 00:08:31 -- host/auth.sh@53 -- # local digest len file key 00:24:01.166 00:08:31 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:01.166 00:08:31 -- host/auth.sh@54 -- # local -A digests 00:24:01.166 00:08:31 -- host/auth.sh@56 -- # digest=sha512 00:24:01.166 00:08:31 -- host/auth.sh@56 -- # len=64 00:24:01.166 00:08:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:01.166 00:08:31 -- host/auth.sh@57 -- # key=78cc1034eb7b7ac2f6e9eb19c8a0d7545879884d70f8156f51ac03d812e44ab4 00:24:01.166 00:08:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:24:01.166 00:08:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.m2I 00:24:01.166 00:08:31 -- host/auth.sh@59 -- # format_dhchap_key 78cc1034eb7b7ac2f6e9eb19c8a0d7545879884d70f8156f51ac03d812e44ab4 3 00:24:01.166 00:08:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 78cc1034eb7b7ac2f6e9eb19c8a0d7545879884d70f8156f51ac03d812e44ab4 3 00:24:01.166 00:08:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:01.166 00:08:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:01.166 00:08:31 -- nvmf/common.sh@693 -- # key=78cc1034eb7b7ac2f6e9eb19c8a0d7545879884d70f8156f51ac03d812e44ab4 00:24:01.166 00:08:31 -- nvmf/common.sh@693 -- # digest=3 00:24:01.166 00:08:31 -- nvmf/common.sh@694 -- # python - 00:24:01.166 00:08:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.m2I 00:24:01.166 00:08:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.m2I 00:24:01.166 00:08:31 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.m2I 00:24:01.166 00:08:31 -- host/auth.sh@87 -- # waitforlisten 518816 00:24:01.166 00:08:31 -- common/autotest_common.sh@817 -- # '[' -z 518816 ']' 00:24:01.166 00:08:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.166 00:08:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:01.166 00:08:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
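The gen_key calls above amount to: read half as many random bytes as the requested hex length, hex-encode them with xxd, and wrap the resulting ASCII hex string in the DHHC-1 secret representation (base64 of the key bytes plus a CRC-32, prefixed with DHHC-1:<digest>: and terminated with a colon). A rough stand-alone sketch of that wrapping; the CRC-32 byte order is my assumption and is not confirmed by the trace:

#!/usr/bin/env bash
# Hypothetical re-creation of what the gen_key / format_dhchap_key helpers in the
# trace appear to do; the CRC-32 byte order is an assumption, not shown in the log.

gen_dhchap_key() {
    local digest=$1 hexlen=$2    # digest: 0=null, 1=sha256, 2=sha384, 3=sha512 (as in the trace)
    local hexkey
    hexkey=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # e.g. 32 -> 16 random bytes -> 32 hex chars
    python3 - "$hexkey" "$digest" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# DHHC-1 secret: base64(ASCII key || CRC-32 of key); little-endian CRC assumed here.
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

key_file=$(mktemp -t spdk.key-null.XXX)
gen_dhchap_key 0 32 > "$key_file"    # same shape as the /tmp/spdk.key-null.* files above
chmod 0600 "$key_file"
cat "$key_file"                      # prints something like DHHC-1:00:<base64>: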
00:24:01.166 00:08:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:01.166 00:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 00:08:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:01.426 00:08:31 -- common/autotest_common.sh@850 -- # return 0 00:24:01.426 00:08:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.426 00:08:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eEH 00:24:01.426 00:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.426 00:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.426 00:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.426 00:08:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.426 00:08:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NdY 00:24:01.426 00:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.426 00:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.427 00:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.427 00:08:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.427 00:08:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.XEF 00:24:01.427 00:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.427 00:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.427 00:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.427 00:08:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.427 00:08:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.wNu 00:24:01.427 00:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.427 00:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.427 00:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.427 00:08:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.427 00:08:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.m2I 00:24:01.427 00:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.427 00:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.427 00:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.427 00:08:31 -- host/auth.sh@92 -- # nvmet_auth_init 00:24:01.427 00:08:31 -- host/auth.sh@35 -- # get_main_ns_ip 00:24:01.427 00:08:31 -- nvmf/common.sh@717 -- # local ip 00:24:01.427 00:08:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:01.427 00:08:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:01.427 00:08:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.427 00:08:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.427 00:08:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:01.427 00:08:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.427 00:08:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:01.427 00:08:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:01.427 00:08:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:01.427 00:08:31 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:01.427 00:08:31 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:01.427 00:08:31 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:01.427 00:08:31 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:01.427 00:08:31 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:01.427 00:08:31 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:01.427 00:08:31 -- nvmf/common.sh@628 -- # local block nvme 00:24:01.427 00:08:31 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:01.427 00:08:31 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:01.427 00:08:31 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:01.427 00:08:31 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:04.821 Waiting for block devices as requested 00:24:04.821 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:04.821 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:04.821 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:04.821 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:04.821 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:04.821 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:04.821 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:05.081 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:05.081 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:05.341 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:05.341 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:05.341 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:05.602 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:05.602 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:05.602 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:05.602 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:05.862 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:06.803 00:08:36 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:06.803 00:08:36 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:06.803 00:08:36 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:06.803 00:08:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:06.803 00:08:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:06.803 00:08:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:06.803 00:08:36 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:06.803 00:08:36 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:06.803 00:08:36 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:06.803 No valid GPT data, bailing 00:24:06.803 00:08:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:06.803 00:08:36 -- scripts/common.sh@391 -- # pt= 00:24:06.803 00:08:36 -- scripts/common.sh@392 -- # return 1 00:24:06.803 00:08:36 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:06.803 00:08:36 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:06.803 00:08:36 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:06.803 00:08:36 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:06.803 00:08:36 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:06.803 00:08:36 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:06.803 00:08:36 -- nvmf/common.sh@656 -- # echo 1 00:24:06.803 00:08:36 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:06.803 00:08:36 -- nvmf/common.sh@658 -- # echo 1 00:24:06.803 00:08:36 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:06.803 00:08:36 -- nvmf/common.sh@661 -- # echo tcp 00:24:06.803 00:08:36 -- 
nvmf/common.sh@662 -- # echo 4420 00:24:06.803 00:08:36 -- nvmf/common.sh@663 -- # echo ipv4 00:24:06.803 00:08:36 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:06.803 00:08:36 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:24:06.803 00:24:06.803 Discovery Log Number of Records 2, Generation counter 2 00:24:06.803 =====Discovery Log Entry 0====== 00:24:06.803 trtype: tcp 00:24:06.803 adrfam: ipv4 00:24:06.803 subtype: current discovery subsystem 00:24:06.803 treq: not specified, sq flow control disable supported 00:24:06.803 portid: 1 00:24:06.803 trsvcid: 4420 00:24:06.803 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:06.803 traddr: 10.0.0.1 00:24:06.803 eflags: none 00:24:06.803 sectype: none 00:24:06.803 =====Discovery Log Entry 1====== 00:24:06.803 trtype: tcp 00:24:06.803 adrfam: ipv4 00:24:06.803 subtype: nvme subsystem 00:24:06.803 treq: not specified, sq flow control disable supported 00:24:06.803 portid: 1 00:24:06.803 trsvcid: 4420 00:24:06.803 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:06.803 traddr: 10.0.0.1 00:24:06.803 eflags: none 00:24:06.803 sectype: none 00:24:06.803 00:08:36 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:06.803 00:08:36 -- host/auth.sh@37 -- # echo 0 00:24:06.803 00:08:36 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:06.803 00:08:36 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:06.803 00:08:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:06.803 00:08:36 -- host/auth.sh@44 -- # digest=sha256 00:24:06.803 00:08:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.803 00:08:36 -- host/auth.sh@44 -- # keyid=1 00:24:06.803 00:08:36 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:06.804 00:08:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:06.804 00:08:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:06.804 00:08:36 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:06.804 00:08:36 -- host/auth.sh@100 -- # IFS=, 00:24:06.804 00:08:36 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:24:06.804 00:08:36 -- host/auth.sh@100 -- # IFS=, 00:24:06.804 00:08:36 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:06.804 00:08:36 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:06.804 00:08:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:06.804 00:08:36 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:24:06.804 00:08:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:06.804 00:08:36 -- host/auth.sh@68 -- # keyid=1 00:24:06.804 00:08:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:06.804 00:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.804 00:08:36 -- common/autotest_common.sh@10 -- # set +x 00:24:06.804 00:08:36 -- 
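On the target side everything goes through the kernel nvmet configfs tree. The echoes in the trace are redirected, so the attribute file each value lands in is not visible here; the sketch below uses the standard kernel nvmet configfs paths and the value-to-attribute mapping is my inference, not something the log confirms.

#!/usr/bin/env bash
# Hedged reconstruction of the configfs writes behind configure_kernel_target and
# nvmet_auth_init above. Attribute file names are the standard nvmet ones; which
# echoed value goes to which file is inferred from context.
set -e
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
subsys=$nvmet/subsystems/$subnqn
port=$nvmet/ports/1
dhchap_key='DHHC-1:00:<base64 secret from the trace>:'   # placeholder; use the real secret printed above

mkdir -p "$subsys/namespaces/1" "$port" "$nvmet/hosts/$hostnqn"

echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # back the exported namespace with the local disk
echo 1            > "$subsys/namespaces/1/enable"
echo 0            > "$subsys/attr_allow_any_host"        # only explicitly allowed hosts may connect

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Per-host DH-HMAC-CHAP parameters, then whitelist the host NQN on the subsystem.
echo 'hmac(sha256)'  > "$nvmet/hosts/$hostnqn/dhchap_hash"
echo ffdhe2048       > "$nvmet/hosts/$hostnqn/dhchap_dhgroup"
echo "$dhchap_key"   > "$nvmet/hosts/$hostnqn/dhchap_key"
ln -s "$nvmet/hosts/$hostnqn" "$subsys/allowed_hosts/"

With the port linked, the nvme discover call in the trace sees two log entries, the well-known discovery subsystem and nqn.2024-02.io.spdk:cnode0, which matches the discovery output above.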
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.804 00:08:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:06.804 00:08:36 -- nvmf/common.sh@717 -- # local ip 00:24:06.804 00:08:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.804 00:08:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.804 00:08:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.804 00:08:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.804 00:08:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.804 00:08:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.804 00:08:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.804 00:08:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.804 00:08:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.804 00:08:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:06.804 00:08:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.804 00:08:36 -- common/autotest_common.sh@10 -- # set +x 00:24:07.065 nvme0n1 00:24:07.065 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.065 00:08:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.065 00:08:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.065 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.065 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.065 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.065 00:08:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.065 00:08:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.065 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.065 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.065 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.065 00:08:37 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:07.065 00:08:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.065 00:08:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.065 00:08:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:07.065 00:08:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.065 00:08:37 -- host/auth.sh@44 -- # digest=sha256 00:24:07.065 00:08:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.065 00:08:37 -- host/auth.sh@44 -- # keyid=0 00:24:07.065 00:08:37 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:07.065 00:08:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.065 00:08:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:07.065 00:08:37 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:07.065 00:08:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:24:07.065 00:08:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.065 00:08:37 -- host/auth.sh@68 -- # digest=sha256 00:24:07.065 00:08:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:07.065 00:08:37 -- host/auth.sh@68 -- # keyid=0 00:24:07.065 00:08:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.065 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.065 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.065 00:08:37 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.065 00:08:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.065 00:08:37 -- nvmf/common.sh@717 -- # local ip 00:24:07.065 00:08:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.065 00:08:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.065 00:08:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.065 00:08:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.065 00:08:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.065 00:08:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.065 00:08:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.065 00:08:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.065 00:08:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.065 00:08:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:07.065 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.065 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 nvme0n1 00:24:07.326 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.326 00:08:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.326 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.326 00:08:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.326 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.326 00:08:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.326 00:08:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.326 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.326 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.326 00:08:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.326 00:08:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:07.326 00:08:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.326 00:08:37 -- host/auth.sh@44 -- # digest=sha256 00:24:07.326 00:08:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.326 00:08:37 -- host/auth.sh@44 -- # keyid=1 00:24:07.326 00:08:37 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:07.326 00:08:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.326 00:08:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:07.326 00:08:37 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:07.326 00:08:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:24:07.326 00:08:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.326 00:08:37 -- host/auth.sh@68 -- # digest=sha256 00:24:07.326 00:08:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:07.326 00:08:37 -- host/auth.sh@68 -- # keyid=1 00:24:07.326 00:08:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.326 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.326 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.326 00:08:37 -- host/auth.sh@70 -- # get_main_ns_ip 
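Every iteration of the loop above repeats the same initiator-side RPC sequence against the running SPDK app. Collected in one place, using scripts/rpc.py instead of the harness's rpc_cmd wrapper and a hypothetical key-file path, it looks roughly like this:

#!/usr/bin/env bash
# One pass of the connect_authenticate flow from the trace, expressed as plain
# rpc.py calls. /tmp/spdk.key-null.example is a hypothetical path; the real files
# are the mktemp'd spdk.key-* files generated earlier in the log.
RPC="scripts/rpc.py"    # assumes the SPDK repo root as CWD and a running app on /var/tmp/spdk.sock

# 1. Register the DHHC-1 secret file with the keyring.
$RPC keyring_file_add_key key0 /tmp/spdk.key-null.example

# 2. Restrict the digests/DH groups bdev_nvme may negotiate for this pass.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 3. Attach to the kernel target, authenticating with the registered key.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0

# 4. Verify the controller came up, then tear it down before the next combination.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
$RPC bdev_nvme_detach_controller nvme0

The harness then repeats this for every digest, DH group and key combination, pointing the kernel host entry's dhchap parameters at the matching secret each time.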
00:24:07.326 00:08:37 -- nvmf/common.sh@717 -- # local ip 00:24:07.326 00:08:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.326 00:08:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.326 00:08:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.326 00:08:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.326 00:08:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.326 00:08:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.326 00:08:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.326 00:08:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.326 00:08:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.326 00:08:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:07.326 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.326 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 nvme0n1 00:24:07.326 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.326 00:08:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.326 00:08:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.326 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.326 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.587 00:08:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.587 00:08:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.587 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.587 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.587 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.587 00:08:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.587 00:08:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:07.587 00:08:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.587 00:08:37 -- host/auth.sh@44 -- # digest=sha256 00:24:07.587 00:08:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.587 00:08:37 -- host/auth.sh@44 -- # keyid=2 00:24:07.587 00:08:37 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:07.587 00:08:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.587 00:08:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:07.587 00:08:37 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:07.587 00:08:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:24:07.587 00:08:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.587 00:08:37 -- host/auth.sh@68 -- # digest=sha256 00:24:07.587 00:08:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:07.587 00:08:37 -- host/auth.sh@68 -- # keyid=2 00:24:07.587 00:08:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.588 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.588 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.588 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.588 00:08:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.588 00:08:37 -- nvmf/common.sh@717 -- # local ip 00:24:07.588 00:08:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.588 00:08:37 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:24:07.588 00:08:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.588 00:08:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.588 00:08:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.588 00:08:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.588 00:08:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.588 00:08:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.588 00:08:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.588 00:08:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:07.588 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.588 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.588 nvme0n1 00:24:07.588 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.588 00:08:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.588 00:08:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.588 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.588 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.588 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.588 00:08:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.588 00:08:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.588 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.588 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.848 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.848 00:08:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.848 00:08:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:07.848 00:08:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.848 00:08:37 -- host/auth.sh@44 -- # digest=sha256 00:24:07.848 00:08:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.848 00:08:37 -- host/auth.sh@44 -- # keyid=3 00:24:07.848 00:08:37 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:07.848 00:08:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.848 00:08:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:07.848 00:08:37 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:07.848 00:08:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:24:07.849 00:08:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.849 00:08:37 -- host/auth.sh@68 -- # digest=sha256 00:24:07.849 00:08:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:07.849 00:08:37 -- host/auth.sh@68 -- # keyid=3 00:24:07.849 00:08:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.849 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.849 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.849 00:08:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.849 00:08:37 -- nvmf/common.sh@717 -- # local ip 00:24:07.849 00:08:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.849 00:08:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.849 00:08:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:24:07.849 00:08:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.849 00:08:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.849 00:08:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.849 00:08:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.849 00:08:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.849 00:08:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.849 00:08:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:07.849 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.849 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 nvme0n1 00:24:07.849 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.849 00:08:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.849 00:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.849 00:08:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.849 00:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 00:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.849 00:08:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.849 00:08:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.849 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.849 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.849 00:08:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.849 00:08:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:07.849 00:08:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.849 00:08:38 -- host/auth.sh@44 -- # digest=sha256 00:24:07.849 00:08:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.849 00:08:38 -- host/auth.sh@44 -- # keyid=4 00:24:07.849 00:08:38 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:07.849 00:08:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.849 00:08:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:07.849 00:08:38 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:07.849 00:08:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:24:07.849 00:08:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.849 00:08:38 -- host/auth.sh@68 -- # digest=sha256 00:24:07.849 00:08:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:07.849 00:08:38 -- host/auth.sh@68 -- # keyid=4 00:24:07.849 00:08:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.849 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.849 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.849 00:08:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.849 00:08:38 -- nvmf/common.sh@717 -- # local ip 00:24:07.849 00:08:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.849 00:08:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.849 00:08:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.849 00:08:38 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.849 00:08:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.849 00:08:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.849 00:08:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.849 00:08:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.849 00:08:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.849 00:08:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:07.849 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.849 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.109 nvme0n1 00:24:08.109 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.109 00:08:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.109 00:08:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.109 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.109 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.109 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.109 00:08:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.109 00:08:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.109 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.109 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.109 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.109 00:08:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.109 00:08:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.109 00:08:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:08.109 00:08:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.109 00:08:38 -- host/auth.sh@44 -- # digest=sha256 00:24:08.109 00:08:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.109 00:08:38 -- host/auth.sh@44 -- # keyid=0 00:24:08.110 00:08:38 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:08.110 00:08:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:08.110 00:08:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:08.110 00:08:38 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:08.110 00:08:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:24:08.110 00:08:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.110 00:08:38 -- host/auth.sh@68 -- # digest=sha256 00:24:08.110 00:08:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:08.110 00:08:38 -- host/auth.sh@68 -- # keyid=0 00:24:08.110 00:08:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.110 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.110 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.110 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.110 00:08:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.110 00:08:38 -- nvmf/common.sh@717 -- # local ip 00:24:08.110 00:08:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.110 00:08:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.110 00:08:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.110 00:08:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.110 00:08:38 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:24:08.110 00:08:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.110 00:08:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.110 00:08:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.110 00:08:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.110 00:08:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:08.110 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.110 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.370 nvme0n1 00:24:08.370 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.370 00:08:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.370 00:08:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.370 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.370 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.370 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.370 00:08:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.370 00:08:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.370 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.370 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.370 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.370 00:08:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.370 00:08:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:08.370 00:08:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.370 00:08:38 -- host/auth.sh@44 -- # digest=sha256 00:24:08.370 00:08:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.370 00:08:38 -- host/auth.sh@44 -- # keyid=1 00:24:08.370 00:08:38 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:08.370 00:08:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:08.370 00:08:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:08.370 00:08:38 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:08.370 00:08:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:24:08.370 00:08:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.370 00:08:38 -- host/auth.sh@68 -- # digest=sha256 00:24:08.370 00:08:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:08.370 00:08:38 -- host/auth.sh@68 -- # keyid=1 00:24:08.370 00:08:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.370 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.370 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.370 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.370 00:08:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.370 00:08:38 -- nvmf/common.sh@717 -- # local ip 00:24:08.370 00:08:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.370 00:08:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.370 00:08:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.370 00:08:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.370 00:08:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.371 00:08:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.371 00:08:38 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.371 00:08:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.371 00:08:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.371 00:08:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:08.371 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.371 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.631 nvme0n1 00:24:08.631 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.631 00:08:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.631 00:08:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.631 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.631 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.631 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.631 00:08:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.631 00:08:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.631 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.631 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.631 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.631 00:08:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.631 00:08:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:08.631 00:08:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.631 00:08:38 -- host/auth.sh@44 -- # digest=sha256 00:24:08.631 00:08:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.631 00:08:38 -- host/auth.sh@44 -- # keyid=2 00:24:08.631 00:08:38 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:08.631 00:08:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:08.631 00:08:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:08.631 00:08:38 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:08.631 00:08:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:24:08.631 00:08:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.631 00:08:38 -- host/auth.sh@68 -- # digest=sha256 00:24:08.631 00:08:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:08.631 00:08:38 -- host/auth.sh@68 -- # keyid=2 00:24:08.631 00:08:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.631 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.631 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.631 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.631 00:08:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.631 00:08:38 -- nvmf/common.sh@717 -- # local ip 00:24:08.631 00:08:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.631 00:08:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.631 00:08:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.631 00:08:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.631 00:08:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.631 00:08:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.631 00:08:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.631 00:08:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.631 00:08:38 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:24:08.631 00:08:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:08.631 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.631 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.892 nvme0n1 00:24:08.892 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.892 00:08:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.892 00:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.892 00:08:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.892 00:08:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.892 00:08:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.892 00:08:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.892 00:08:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.892 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.892 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:08.892 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.892 00:08:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.892 00:08:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:08.892 00:08:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.892 00:08:39 -- host/auth.sh@44 -- # digest=sha256 00:24:08.892 00:08:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.892 00:08:39 -- host/auth.sh@44 -- # keyid=3 00:24:08.892 00:08:39 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:08.892 00:08:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:08.892 00:08:39 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:08.892 00:08:39 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:08.892 00:08:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:24:08.892 00:08:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.892 00:08:39 -- host/auth.sh@68 -- # digest=sha256 00:24:08.892 00:08:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:08.892 00:08:39 -- host/auth.sh@68 -- # keyid=3 00:24:08.892 00:08:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.892 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.892 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:08.892 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.892 00:08:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.892 00:08:39 -- nvmf/common.sh@717 -- # local ip 00:24:08.892 00:08:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.892 00:08:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.892 00:08:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.892 00:08:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.892 00:08:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.892 00:08:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.892 00:08:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.892 00:08:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.892 00:08:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.892 00:08:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:08.892 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.892 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 nvme0n1 00:24:09.153 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.153 00:08:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.153 00:08:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.153 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.153 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.153 00:08:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.153 00:08:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.153 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.153 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.153 00:08:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.153 00:08:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:09.153 00:08:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.153 00:08:39 -- host/auth.sh@44 -- # digest=sha256 00:24:09.153 00:08:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:09.153 00:08:39 -- host/auth.sh@44 -- # keyid=4 00:24:09.153 00:08:39 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:09.153 00:08:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:09.153 00:08:39 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:09.153 00:08:39 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:09.153 00:08:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:24:09.153 00:08:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.153 00:08:39 -- host/auth.sh@68 -- # digest=sha256 00:24:09.153 00:08:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:09.153 00:08:39 -- host/auth.sh@68 -- # keyid=4 00:24:09.153 00:08:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:09.153 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.153 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.153 00:08:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.153 00:08:39 -- nvmf/common.sh@717 -- # local ip 00:24:09.153 00:08:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.153 00:08:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.153 00:08:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.153 00:08:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.153 00:08:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:09.153 00:08:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.153 00:08:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:09.153 00:08:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:09.153 00:08:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:09.153 00:08:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:24:09.153 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.153 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.414 nvme0n1 00:24:09.414 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.414 00:08:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.414 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.414 00:08:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.414 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.414 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.414 00:08:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.414 00:08:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.414 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.414 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.414 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.414 00:08:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.414 00:08:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.414 00:08:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:09.414 00:08:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.414 00:08:39 -- host/auth.sh@44 -- # digest=sha256 00:24:09.414 00:08:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.414 00:08:39 -- host/auth.sh@44 -- # keyid=0 00:24:09.414 00:08:39 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:09.414 00:08:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:09.414 00:08:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:09.414 00:08:39 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:09.414 00:08:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:24:09.414 00:08:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.414 00:08:39 -- host/auth.sh@68 -- # digest=sha256 00:24:09.414 00:08:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:09.414 00:08:39 -- host/auth.sh@68 -- # keyid=0 00:24:09.414 00:08:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.414 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.414 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.414 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.414 00:08:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.414 00:08:39 -- nvmf/common.sh@717 -- # local ip 00:24:09.414 00:08:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.414 00:08:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.414 00:08:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.414 00:08:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.414 00:08:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:09.414 00:08:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.414 00:08:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:09.414 00:08:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:09.414 00:08:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:09.414 00:08:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:09.414 00:08:39 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:09.414 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.675 nvme0n1 00:24:09.675 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.675 00:08:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.675 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.675 00:08:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.675 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.675 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.937 00:08:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.937 00:08:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.937 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.937 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.937 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.937 00:08:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.937 00:08:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:09.937 00:08:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.937 00:08:39 -- host/auth.sh@44 -- # digest=sha256 00:24:09.937 00:08:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.937 00:08:39 -- host/auth.sh@44 -- # keyid=1 00:24:09.937 00:08:39 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:09.937 00:08:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:09.937 00:08:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:09.937 00:08:39 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:09.937 00:08:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:24:09.937 00:08:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.937 00:08:39 -- host/auth.sh@68 -- # digest=sha256 00:24:09.937 00:08:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:09.937 00:08:39 -- host/auth.sh@68 -- # keyid=1 00:24:09.937 00:08:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.937 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.937 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:09.937 00:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.937 00:08:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.937 00:08:39 -- nvmf/common.sh@717 -- # local ip 00:24:09.937 00:08:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.937 00:08:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.937 00:08:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.937 00:08:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.937 00:08:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:09.937 00:08:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.937 00:08:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:09.937 00:08:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:09.937 00:08:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:09.937 00:08:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:09.937 00:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.937 00:08:39 -- common/autotest_common.sh@10 -- # set +x 00:24:10.198 nvme0n1 00:24:10.198 
00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.198 00:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.198 00:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:10.198 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.198 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.198 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.198 00:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.198 00:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.199 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.199 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.199 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.199 00:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:10.199 00:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:10.199 00:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:10.199 00:08:40 -- host/auth.sh@44 -- # digest=sha256 00:24:10.199 00:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.199 00:08:40 -- host/auth.sh@44 -- # keyid=2 00:24:10.199 00:08:40 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:10.199 00:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:10.199 00:08:40 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:10.199 00:08:40 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:10.199 00:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:24:10.199 00:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:10.199 00:08:40 -- host/auth.sh@68 -- # digest=sha256 00:24:10.199 00:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:10.199 00:08:40 -- host/auth.sh@68 -- # keyid=2 00:24:10.199 00:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.199 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.199 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.199 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.199 00:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:10.199 00:08:40 -- nvmf/common.sh@717 -- # local ip 00:24:10.199 00:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:10.199 00:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:10.199 00:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.199 00:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.199 00:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:10.199 00:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.199 00:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:10.199 00:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:10.199 00:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:10.199 00:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:10.199 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.199 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.460 nvme0n1 00:24:10.460 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.460 00:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.460 00:08:40 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:24:10.460 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.460 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.460 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.460 00:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.460 00:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.460 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.460 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.460 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.460 00:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:10.460 00:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:10.460 00:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:10.460 00:08:40 -- host/auth.sh@44 -- # digest=sha256 00:24:10.460 00:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.460 00:08:40 -- host/auth.sh@44 -- # keyid=3 00:24:10.460 00:08:40 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:10.460 00:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:10.460 00:08:40 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:10.460 00:08:40 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:10.460 00:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:24:10.460 00:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:10.460 00:08:40 -- host/auth.sh@68 -- # digest=sha256 00:24:10.460 00:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:10.460 00:08:40 -- host/auth.sh@68 -- # keyid=3 00:24:10.460 00:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.460 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.460 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.460 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.460 00:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:10.460 00:08:40 -- nvmf/common.sh@717 -- # local ip 00:24:10.460 00:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:10.460 00:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:10.460 00:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.460 00:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.460 00:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:10.460 00:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.460 00:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:10.460 00:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:10.460 00:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:10.460 00:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:10.460 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.460 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.722 nvme0n1 00:24:10.722 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.722 00:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.722 00:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:10.722 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 
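The trace above repeats one fixed host-side sequence for every (digest, dhgroup, keyid) combination. A minimal sketch of that sequence, assuming rpc_cmd is the test suite's thin wrapper around scripts/rpc.py and reusing only the RPC names, flags and addresses that appear verbatim in the trace (sha256/ffdhe4096/key3 are simply the values from the surrounding iteration):

    # Stand-in for the framework helper; the path is an assumption.
    rpc_cmd() { "${rootdir:-.}/scripts/rpc.py" "$@"; }

    # Restrict the initiator to the digest/DH-group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # Attach to the target at 10.0.0.1:4420, authenticating with the key registered as key3.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
    # Authentication succeeded if a controller named nvme0 shows up; then tear it down.
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0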
00:24:10.722 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.722 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.984 00:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.984 00:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.984 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.984 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.984 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.984 00:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:10.984 00:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:10.984 00:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:10.984 00:08:40 -- host/auth.sh@44 -- # digest=sha256 00:24:10.984 00:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.984 00:08:40 -- host/auth.sh@44 -- # keyid=4 00:24:10.984 00:08:40 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:10.984 00:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:10.984 00:08:40 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:10.984 00:08:40 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:10.984 00:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:24:10.984 00:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:10.984 00:08:40 -- host/auth.sh@68 -- # digest=sha256 00:24:10.984 00:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:10.984 00:08:40 -- host/auth.sh@68 -- # keyid=4 00:24:10.984 00:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.984 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.984 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.984 00:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.984 00:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:10.984 00:08:40 -- nvmf/common.sh@717 -- # local ip 00:24:10.984 00:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:10.984 00:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:10.984 00:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.984 00:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.984 00:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:10.984 00:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.984 00:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:10.984 00:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:10.984 00:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:10.984 00:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.984 00:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.984 00:08:40 -- common/autotest_common.sh@10 -- # set +x 00:24:11.244 nvme0n1 00:24:11.244 00:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.244 00:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.244 00:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:11.245 00:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.245 00:08:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.245 
00:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.245 00:08:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.245 00:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.245 00:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.245 00:08:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.245 00:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.245 00:08:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.245 00:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:11.245 00:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:11.245 00:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:11.245 00:08:41 -- host/auth.sh@44 -- # digest=sha256 00:24:11.245 00:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:11.245 00:08:41 -- host/auth.sh@44 -- # keyid=0 00:24:11.245 00:08:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:11.245 00:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:11.245 00:08:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:11.245 00:08:41 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:11.245 00:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:24:11.245 00:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:11.245 00:08:41 -- host/auth.sh@68 -- # digest=sha256 00:24:11.245 00:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:11.245 00:08:41 -- host/auth.sh@68 -- # keyid=0 00:24:11.245 00:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:11.245 00:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.245 00:08:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.245 00:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.245 00:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:11.245 00:08:41 -- nvmf/common.sh@717 -- # local ip 00:24:11.245 00:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:11.245 00:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:11.245 00:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.245 00:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.245 00:08:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:11.245 00:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.245 00:08:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:11.245 00:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:11.245 00:08:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:11.245 00:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:11.245 00:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.245 00:08:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.818 nvme0n1 00:24:11.818 00:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.818 00:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.818 00:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:11.818 00:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.818 00:08:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.818 00:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.818 00:08:41 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.818 00:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.818 00:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.818 00:08:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.818 00:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.818 00:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:11.818 00:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:11.818 00:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:11.818 00:08:41 -- host/auth.sh@44 -- # digest=sha256 00:24:11.818 00:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:11.818 00:08:41 -- host/auth.sh@44 -- # keyid=1 00:24:11.818 00:08:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:11.818 00:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:11.818 00:08:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:11.818 00:08:41 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:11.818 00:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:24:11.818 00:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:11.818 00:08:41 -- host/auth.sh@68 -- # digest=sha256 00:24:11.818 00:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:11.818 00:08:41 -- host/auth.sh@68 -- # keyid=1 00:24:11.818 00:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:11.818 00:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.818 00:08:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.818 00:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.818 00:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:11.818 00:08:41 -- nvmf/common.sh@717 -- # local ip 00:24:11.818 00:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:11.819 00:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:11.819 00:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.819 00:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.819 00:08:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:11.819 00:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.819 00:08:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:11.819 00:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:11.819 00:08:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:11.819 00:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:11.819 00:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.819 00:08:41 -- common/autotest_common.sh@10 -- # set +x 00:24:12.391 nvme0n1 00:24:12.391 00:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.391 00:08:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.391 00:08:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:12.391 00:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.391 00:08:42 -- common/autotest_common.sh@10 -- # set +x 00:24:12.391 00:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.391 00:08:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.391 00:08:42 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:12.391 00:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.391 00:08:42 -- common/autotest_common.sh@10 -- # set +x 00:24:12.391 00:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.391 00:08:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:12.391 00:08:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:12.391 00:08:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.391 00:08:42 -- host/auth.sh@44 -- # digest=sha256 00:24:12.391 00:08:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.391 00:08:42 -- host/auth.sh@44 -- # keyid=2 00:24:12.391 00:08:42 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:12.391 00:08:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.391 00:08:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:12.391 00:08:42 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:12.391 00:08:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:24:12.391 00:08:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.391 00:08:42 -- host/auth.sh@68 -- # digest=sha256 00:24:12.391 00:08:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:12.391 00:08:42 -- host/auth.sh@68 -- # keyid=2 00:24:12.391 00:08:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.391 00:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.391 00:08:42 -- common/autotest_common.sh@10 -- # set +x 00:24:12.391 00:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.391 00:08:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.391 00:08:42 -- nvmf/common.sh@717 -- # local ip 00:24:12.391 00:08:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.391 00:08:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.391 00:08:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.391 00:08:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.391 00:08:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.391 00:08:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.391 00:08:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.391 00:08:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.391 00:08:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.391 00:08:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:12.391 00:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.391 00:08:42 -- common/autotest_common.sh@10 -- # set +x 00:24:12.964 nvme0n1 00:24:12.964 00:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.964 00:08:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.964 00:08:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:12.964 00:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.964 00:08:42 -- common/autotest_common.sh@10 -- # set +x 00:24:12.964 00:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.964 00:08:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.964 00:08:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.964 00:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.964 00:08:42 -- common/autotest_common.sh@10 -- # 
set +x 00:24:12.964 00:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.964 00:08:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:12.964 00:08:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:12.964 00:08:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.964 00:08:42 -- host/auth.sh@44 -- # digest=sha256 00:24:12.964 00:08:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.964 00:08:42 -- host/auth.sh@44 -- # keyid=3 00:24:12.964 00:08:42 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:12.964 00:08:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.964 00:08:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:12.964 00:08:42 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:12.964 00:08:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:24:12.964 00:08:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.964 00:08:42 -- host/auth.sh@68 -- # digest=sha256 00:24:12.964 00:08:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:12.964 00:08:42 -- host/auth.sh@68 -- # keyid=3 00:24:12.964 00:08:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.964 00:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.964 00:08:42 -- common/autotest_common.sh@10 -- # set +x 00:24:12.964 00:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.964 00:08:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.964 00:08:42 -- nvmf/common.sh@717 -- # local ip 00:24:12.964 00:08:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.964 00:08:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.964 00:08:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.964 00:08:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.964 00:08:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.964 00:08:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.964 00:08:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.964 00:08:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.964 00:08:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.964 00:08:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:12.964 00:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.964 00:08:42 -- common/autotest_common.sh@10 -- # set +x 00:24:13.225 nvme0n1 00:24:13.225 00:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.225 00:08:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.225 00:08:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.225 00:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.225 00:08:43 -- common/autotest_common.sh@10 -- # set +x 00:24:13.225 00:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.225 00:08:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.225 00:08:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.225 00:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.225 00:08:43 -- common/autotest_common.sh@10 -- # set +x 00:24:13.486 00:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.486 00:08:43 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:13.486 00:08:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:13.486 00:08:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:13.486 00:08:43 -- host/auth.sh@44 -- # digest=sha256 00:24:13.486 00:08:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:13.486 00:08:43 -- host/auth.sh@44 -- # keyid=4 00:24:13.486 00:08:43 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:13.486 00:08:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:13.486 00:08:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:13.486 00:08:43 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:13.486 00:08:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:24:13.486 00:08:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:13.486 00:08:43 -- host/auth.sh@68 -- # digest=sha256 00:24:13.486 00:08:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:13.486 00:08:43 -- host/auth.sh@68 -- # keyid=4 00:24:13.486 00:08:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:13.486 00:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.486 00:08:43 -- common/autotest_common.sh@10 -- # set +x 00:24:13.486 00:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.486 00:08:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:13.486 00:08:43 -- nvmf/common.sh@717 -- # local ip 00:24:13.486 00:08:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:13.486 00:08:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:13.486 00:08:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.486 00:08:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.486 00:08:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:13.486 00:08:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.486 00:08:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:13.487 00:08:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:13.487 00:08:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:13.487 00:08:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.487 00:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.487 00:08:43 -- common/autotest_common.sh@10 -- # set +x 00:24:13.748 nvme0n1 00:24:13.748 00:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.748 00:08:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.748 00:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.748 00:08:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.748 00:08:43 -- common/autotest_common.sh@10 -- # set +x 00:24:13.748 00:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.009 00:08:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.009 00:08:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.009 00:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.009 00:08:43 -- common/autotest_common.sh@10 -- # set +x 00:24:14.009 00:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.009 00:08:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.009 00:08:43 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.009 00:08:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:14.009 00:08:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.009 00:08:43 -- host/auth.sh@44 -- # digest=sha256 00:24:14.009 00:08:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.009 00:08:43 -- host/auth.sh@44 -- # keyid=0 00:24:14.009 00:08:43 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:14.009 00:08:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.009 00:08:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:14.009 00:08:44 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:14.009 00:08:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:24:14.009 00:08:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.009 00:08:44 -- host/auth.sh@68 -- # digest=sha256 00:24:14.009 00:08:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:14.009 00:08:44 -- host/auth.sh@68 -- # keyid=0 00:24:14.009 00:08:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.009 00:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.009 00:08:44 -- common/autotest_common.sh@10 -- # set +x 00:24:14.009 00:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.009 00:08:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.009 00:08:44 -- nvmf/common.sh@717 -- # local ip 00:24:14.009 00:08:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.009 00:08:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.009 00:08:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.009 00:08:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.009 00:08:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.009 00:08:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.009 00:08:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.009 00:08:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.009 00:08:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.009 00:08:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:14.009 00:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.009 00:08:44 -- common/autotest_common.sh@10 -- # set +x 00:24:14.582 nvme0n1 00:24:14.582 00:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.582 00:08:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.582 00:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.582 00:08:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.582 00:08:44 -- common/autotest_common.sh@10 -- # set +x 00:24:14.582 00:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.845 00:08:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.845 00:08:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.845 00:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.845 00:08:44 -- common/autotest_common.sh@10 -- # set +x 00:24:14.845 00:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.845 00:08:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.845 00:08:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:14.845 00:08:44 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.845 00:08:44 -- host/auth.sh@44 -- # digest=sha256 00:24:14.845 00:08:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.845 00:08:44 -- host/auth.sh@44 -- # keyid=1 00:24:14.845 00:08:44 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:14.845 00:08:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.845 00:08:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:14.845 00:08:44 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:14.845 00:08:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:24:14.845 00:08:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.845 00:08:44 -- host/auth.sh@68 -- # digest=sha256 00:24:14.845 00:08:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:14.845 00:08:44 -- host/auth.sh@68 -- # keyid=1 00:24:14.845 00:08:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.845 00:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.845 00:08:44 -- common/autotest_common.sh@10 -- # set +x 00:24:14.845 00:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.845 00:08:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.845 00:08:44 -- nvmf/common.sh@717 -- # local ip 00:24:14.845 00:08:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.845 00:08:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.845 00:08:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.845 00:08:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.845 00:08:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.845 00:08:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.845 00:08:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.845 00:08:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.845 00:08:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.845 00:08:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:14.845 00:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.845 00:08:44 -- common/autotest_common.sh@10 -- # set +x 00:24:15.418 nvme0n1 00:24:15.418 00:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.418 00:08:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.418 00:08:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.418 00:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.418 00:08:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.418 00:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.679 00:08:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.679 00:08:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.679 00:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.679 00:08:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.679 00:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.679 00:08:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.679 00:08:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:15.679 00:08:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.679 00:08:45 -- host/auth.sh@44 -- # digest=sha256 
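The iteration order driving this output is visible in the trace markers themselves (host/auth.sh@107-@111): digests on the outside, DH groups next, key indices innermost, with the target key reset before each authenticated connect. A rough control-flow sketch reconstructed from those loop headers; nvmet_auth_set_key and connect_authenticate are the script's own helpers, and the array contents are defined earlier in auth.sh (only the sha256/sha384 digests and the ffdhe groups seen in this excerpt are listed, so treat them as placeholders):

    digests=(sha256 sha384)                                     # plus any others defined earlier
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    declare -a keys                                             # keys[0..4]: DHHC-1 secrets set up earlier
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # reconfigure the target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify nvme0, detach
            done
        done
    done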
00:24:15.679 00:08:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:15.679 00:08:45 -- host/auth.sh@44 -- # keyid=2 00:24:15.679 00:08:45 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:15.679 00:08:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.679 00:08:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:15.679 00:08:45 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:15.679 00:08:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:24:15.679 00:08:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.679 00:08:45 -- host/auth.sh@68 -- # digest=sha256 00:24:15.679 00:08:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:15.679 00:08:45 -- host/auth.sh@68 -- # keyid=2 00:24:15.679 00:08:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:15.679 00:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.679 00:08:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.679 00:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.679 00:08:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.679 00:08:45 -- nvmf/common.sh@717 -- # local ip 00:24:15.679 00:08:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.679 00:08:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.679 00:08:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.679 00:08:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.679 00:08:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.679 00:08:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.679 00:08:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.679 00:08:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.679 00:08:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:15.679 00:08:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:15.679 00:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.679 00:08:45 -- common/autotest_common.sh@10 -- # set +x 00:24:16.251 nvme0n1 00:24:16.252 00:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.252 00:08:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.252 00:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.252 00:08:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.252 00:08:46 -- common/autotest_common.sh@10 -- # set +x 00:24:16.252 00:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.252 00:08:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.252 00:08:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.252 00:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.252 00:08:46 -- common/autotest_common.sh@10 -- # set +x 00:24:16.252 00:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.252 00:08:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.252 00:08:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:16.252 00:08:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.252 00:08:46 -- host/auth.sh@44 -- # digest=sha256 00:24:16.252 00:08:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:16.252 00:08:46 -- host/auth.sh@44 -- # keyid=3 00:24:16.252 00:08:46 -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:16.252 00:08:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.252 00:08:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:16.252 00:08:46 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:16.252 00:08:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:24:16.252 00:08:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.252 00:08:46 -- host/auth.sh@68 -- # digest=sha256 00:24:16.252 00:08:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:16.252 00:08:46 -- host/auth.sh@68 -- # keyid=3 00:24:16.252 00:08:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:16.252 00:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.252 00:08:46 -- common/autotest_common.sh@10 -- # set +x 00:24:16.252 00:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.252 00:08:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.252 00:08:46 -- nvmf/common.sh@717 -- # local ip 00:24:16.252 00:08:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.252 00:08:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.252 00:08:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.252 00:08:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.252 00:08:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.252 00:08:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.252 00:08:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.252 00:08:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.252 00:08:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.252 00:08:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:16.252 00:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.252 00:08:46 -- common/autotest_common.sh@10 -- # set +x 00:24:17.194 nvme0n1 00:24:17.194 00:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.194 00:08:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.194 00:08:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.194 00:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.194 00:08:47 -- common/autotest_common.sh@10 -- # set +x 00:24:17.194 00:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.194 00:08:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.194 00:08:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.194 00:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.194 00:08:47 -- common/autotest_common.sh@10 -- # set +x 00:24:17.194 00:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.194 00:08:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.194 00:08:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:17.194 00:08:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.194 00:08:47 -- host/auth.sh@44 -- # digest=sha256 00:24:17.194 00:08:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.194 00:08:47 -- host/auth.sh@44 -- # keyid=4 00:24:17.194 00:08:47 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:17.194 
00:08:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:17.194 00:08:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:17.194 00:08:47 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:17.194 00:08:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:24:17.194 00:08:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.194 00:08:47 -- host/auth.sh@68 -- # digest=sha256 00:24:17.194 00:08:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:17.194 00:08:47 -- host/auth.sh@68 -- # keyid=4 00:24:17.194 00:08:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.194 00:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.194 00:08:47 -- common/autotest_common.sh@10 -- # set +x 00:24:17.194 00:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.194 00:08:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.194 00:08:47 -- nvmf/common.sh@717 -- # local ip 00:24:17.194 00:08:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.194 00:08:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.194 00:08:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.194 00:08:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.194 00:08:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.194 00:08:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.194 00:08:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.194 00:08:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.194 00:08:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.194 00:08:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.194 00:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.194 00:08:47 -- common/autotest_common.sh@10 -- # set +x 00:24:18.135 nvme0n1 00:24:18.135 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.135 00:08:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.135 00:08:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.135 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.135 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.135 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.135 00:08:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.135 00:08:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.135 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.135 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.135 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.136 00:08:48 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:18.136 00:08:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.136 00:08:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.136 00:08:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:18.136 00:08:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.136 00:08:48 -- host/auth.sh@44 -- # digest=sha384 00:24:18.136 00:08:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.136 00:08:48 -- host/auth.sh@44 -- # keyid=0 00:24:18.136 00:08:48 -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:18.136 00:08:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.136 00:08:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:18.136 00:08:48 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:18.136 00:08:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:24:18.136 00:08:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.136 00:08:48 -- host/auth.sh@68 -- # digest=sha384 00:24:18.136 00:08:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:18.136 00:08:48 -- host/auth.sh@68 -- # keyid=0 00:24:18.136 00:08:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:18.136 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.136 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.136 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.136 00:08:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.136 00:08:48 -- nvmf/common.sh@717 -- # local ip 00:24:18.136 00:08:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.136 00:08:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.136 00:08:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.136 00:08:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.136 00:08:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.136 00:08:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.136 00:08:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.136 00:08:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.136 00:08:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.136 00:08:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:18.136 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.136 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.136 nvme0n1 00:24:18.136 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.136 00:08:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.136 00:08:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.136 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.136 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.136 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.136 00:08:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.136 00:08:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.136 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.136 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.136 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.136 00:08:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.136 00:08:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:18.136 00:08:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.136 00:08:48 -- host/auth.sh@44 -- # digest=sha384 00:24:18.136 00:08:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.136 00:08:48 -- host/auth.sh@44 -- # keyid=1 00:24:18.136 00:08:48 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:18.136 00:08:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.136 
00:08:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:18.136 00:08:48 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:18.136 00:08:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:24:18.136 00:08:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.136 00:08:48 -- host/auth.sh@68 -- # digest=sha384 00:24:18.136 00:08:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:18.136 00:08:48 -- host/auth.sh@68 -- # keyid=1 00:24:18.136 00:08:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:18.136 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.136 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.136 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.136 00:08:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.136 00:08:48 -- nvmf/common.sh@717 -- # local ip 00:24:18.136 00:08:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.136 00:08:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.136 00:08:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.136 00:08:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.136 00:08:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.136 00:08:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.136 00:08:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.136 00:08:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.136 00:08:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.136 00:08:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:18.136 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.136 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.397 nvme0n1 00:24:18.397 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.397 00:08:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.397 00:08:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.397 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.397 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.397 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.397 00:08:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.397 00:08:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.397 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.397 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.397 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.397 00:08:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.397 00:08:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:18.397 00:08:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.397 00:08:48 -- host/auth.sh@44 -- # digest=sha384 00:24:18.397 00:08:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.397 00:08:48 -- host/auth.sh@44 -- # keyid=2 00:24:18.397 00:08:48 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:18.397 00:08:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.397 00:08:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:18.397 00:08:48 -- host/auth.sh@49 -- # echo 
DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:18.397 00:08:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:24:18.397 00:08:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.397 00:08:48 -- host/auth.sh@68 -- # digest=sha384 00:24:18.397 00:08:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:18.397 00:08:48 -- host/auth.sh@68 -- # keyid=2 00:24:18.397 00:08:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:18.397 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.397 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.397 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.397 00:08:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.397 00:08:48 -- nvmf/common.sh@717 -- # local ip 00:24:18.397 00:08:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.397 00:08:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.397 00:08:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.397 00:08:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.397 00:08:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.397 00:08:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.397 00:08:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.397 00:08:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.397 00:08:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.397 00:08:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:18.397 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.397 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.658 nvme0n1 00:24:18.658 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.658 00:08:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.658 00:08:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.658 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.658 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.658 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.658 00:08:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.658 00:08:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.658 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.658 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.658 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.658 00:08:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.658 00:08:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:18.658 00:08:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.658 00:08:48 -- host/auth.sh@44 -- # digest=sha384 00:24:18.658 00:08:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.658 00:08:48 -- host/auth.sh@44 -- # keyid=3 00:24:18.658 00:08:48 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:18.658 00:08:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.658 00:08:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:18.658 00:08:48 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:18.658 00:08:48 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:24:18.658 00:08:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.658 00:08:48 -- host/auth.sh@68 -- # digest=sha384 00:24:18.658 00:08:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:18.658 00:08:48 -- host/auth.sh@68 -- # keyid=3 00:24:18.658 00:08:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:18.658 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.658 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.658 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.658 00:08:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.658 00:08:48 -- nvmf/common.sh@717 -- # local ip 00:24:18.658 00:08:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.658 00:08:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.658 00:08:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.658 00:08:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.658 00:08:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.658 00:08:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.658 00:08:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.658 00:08:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.658 00:08:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.658 00:08:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:18.658 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.658 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.919 nvme0n1 00:24:18.919 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.919 00:08:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.919 00:08:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.919 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.919 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.919 00:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.919 00:08:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.919 00:08:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.919 00:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.919 00:08:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.919 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.919 00:08:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.919 00:08:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:18.919 00:08:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.919 00:08:49 -- host/auth.sh@44 -- # digest=sha384 00:24:18.919 00:08:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.919 00:08:49 -- host/auth.sh@44 -- # keyid=4 00:24:18.919 00:08:49 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:18.919 00:08:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.919 00:08:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:18.919 00:08:49 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:18.919 00:08:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:24:18.919 00:08:49 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:24:18.919 00:08:49 -- host/auth.sh@68 -- # digest=sha384 00:24:18.919 00:08:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:18.919 00:08:49 -- host/auth.sh@68 -- # keyid=4 00:24:18.919 00:08:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:18.919 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.919 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:18.919 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.919 00:08:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.920 00:08:49 -- nvmf/common.sh@717 -- # local ip 00:24:18.920 00:08:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.920 00:08:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.920 00:08:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.920 00:08:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.920 00:08:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.920 00:08:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.920 00:08:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.920 00:08:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.920 00:08:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.920 00:08:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.920 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.920 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.182 nvme0n1 00:24:19.182 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.182 00:08:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.182 00:08:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.182 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.182 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.182 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.182 00:08:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.182 00:08:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.182 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.182 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.182 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.182 00:08:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.182 00:08:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.182 00:08:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:19.182 00:08:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.182 00:08:49 -- host/auth.sh@44 -- # digest=sha384 00:24:19.182 00:08:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.182 00:08:49 -- host/auth.sh@44 -- # keyid=0 00:24:19.182 00:08:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:19.182 00:08:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.182 00:08:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:19.182 00:08:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:19.182 00:08:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:24:19.182 00:08:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.182 00:08:49 -- host/auth.sh@68 -- # 
digest=sha384 00:24:19.182 00:08:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:19.182 00:08:49 -- host/auth.sh@68 -- # keyid=0 00:24:19.182 00:08:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:19.182 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.182 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.182 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.182 00:08:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.182 00:08:49 -- nvmf/common.sh@717 -- # local ip 00:24:19.182 00:08:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.182 00:08:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.182 00:08:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.182 00:08:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.182 00:08:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.182 00:08:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.182 00:08:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.182 00:08:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.182 00:08:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.182 00:08:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:19.182 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.182 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.443 nvme0n1 00:24:19.443 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.443 00:08:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.443 00:08:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.443 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.443 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.443 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.443 00:08:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.443 00:08:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.443 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.443 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.443 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.443 00:08:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.443 00:08:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:19.443 00:08:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.443 00:08:49 -- host/auth.sh@44 -- # digest=sha384 00:24:19.443 00:08:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.443 00:08:49 -- host/auth.sh@44 -- # keyid=1 00:24:19.443 00:08:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:19.443 00:08:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.443 00:08:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:19.443 00:08:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:19.443 00:08:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:24:19.443 00:08:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.443 00:08:49 -- host/auth.sh@68 -- # digest=sha384 00:24:19.443 00:08:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:19.443 00:08:49 -- host/auth.sh@68 
-- # keyid=1 00:24:19.443 00:08:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:19.443 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.443 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.443 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.443 00:08:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.443 00:08:49 -- nvmf/common.sh@717 -- # local ip 00:24:19.443 00:08:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.443 00:08:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.443 00:08:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.443 00:08:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.443 00:08:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.443 00:08:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.443 00:08:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.443 00:08:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.443 00:08:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.443 00:08:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:19.444 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.444 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.704 nvme0n1 00:24:19.704 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.704 00:08:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.704 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.704 00:08:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.704 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.704 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.704 00:08:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.704 00:08:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.704 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.704 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.704 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.704 00:08:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.704 00:08:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:19.704 00:08:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.704 00:08:49 -- host/auth.sh@44 -- # digest=sha384 00:24:19.704 00:08:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.704 00:08:49 -- host/auth.sh@44 -- # keyid=2 00:24:19.704 00:08:49 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:19.704 00:08:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.704 00:08:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:19.704 00:08:49 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:19.704 00:08:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:24:19.704 00:08:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.704 00:08:49 -- host/auth.sh@68 -- # digest=sha384 00:24:19.704 00:08:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:19.704 00:08:49 -- host/auth.sh@68 -- # keyid=2 00:24:19.704 00:08:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:19.704 00:08:49 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.704 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.704 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.704 00:08:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.704 00:08:49 -- nvmf/common.sh@717 -- # local ip 00:24:19.704 00:08:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.704 00:08:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.704 00:08:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.704 00:08:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.704 00:08:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.704 00:08:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.704 00:08:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.704 00:08:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.704 00:08:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.704 00:08:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:19.704 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.704 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.973 nvme0n1 00:24:19.973 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.973 00:08:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.973 00:08:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.973 00:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.973 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:19.973 00:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.973 00:08:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.973 00:08:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.973 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.973 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:19.973 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.973 00:08:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.973 00:08:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:19.973 00:08:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.973 00:08:50 -- host/auth.sh@44 -- # digest=sha384 00:24:19.973 00:08:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.973 00:08:50 -- host/auth.sh@44 -- # keyid=3 00:24:19.973 00:08:50 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:19.973 00:08:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.973 00:08:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:19.973 00:08:50 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:19.973 00:08:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:24:19.973 00:08:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.973 00:08:50 -- host/auth.sh@68 -- # digest=sha384 00:24:19.973 00:08:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:19.973 00:08:50 -- host/auth.sh@68 -- # keyid=3 00:24:19.973 00:08:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:19.973 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.973 00:08:50 -- common/autotest_common.sh@10 -- # set +x 
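[editor's note] The xtrace above and below is the same three-level sweep repeated for every combination; the markers host/auth.sh@107-@111 correspond to loops over digests, DH groups and key indices, with the target-side key programmed first and the host connect attempted second. A minimal control-flow sketch of that sweep, with the helper bodies stubbed out (the real helpers live in host/auth.sh; the digest list is abbreviated and the key strings are placeholders for the DHHC-1 secrets visible in the trace):

  # Control-flow sketch of the host/auth.sh@107-@111 sweep (stubs, not the real script).
  digests=(sha256 sha384)                                       # abbreviated list
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  keys=('DHHC-1:00:<key0>:' 'DHHC-1:00:<key1>:' 'DHHC-1:01:<key2>:' 'DHHC-1:02:<key3>:' 'DHHC-1:03:<key4>:')

  nvmet_auth_set_key()   { echo "target: $1 $2 key $3"; }       # stub: programs the secret on the target
  connect_authenticate() { echo "host:   $1 $2 key $3"; }       # stub: attach, verify nvme0, detach

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done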
00:24:19.973 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.973 00:08:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.973 00:08:50 -- nvmf/common.sh@717 -- # local ip 00:24:19.973 00:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.973 00:08:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.973 00:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.973 00:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.973 00:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.973 00:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.973 00:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.973 00:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.973 00:08:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.973 00:08:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:19.973 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.973 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.238 nvme0n1 00:24:20.238 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.238 00:08:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.238 00:08:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.238 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.238 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.238 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.238 00:08:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.238 00:08:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.238 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.238 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.238 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.238 00:08:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.238 00:08:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:20.238 00:08:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.238 00:08:50 -- host/auth.sh@44 -- # digest=sha384 00:24:20.238 00:08:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.238 00:08:50 -- host/auth.sh@44 -- # keyid=4 00:24:20.238 00:08:50 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:20.238 00:08:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:20.238 00:08:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:20.238 00:08:50 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:20.238 00:08:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:24:20.238 00:08:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.238 00:08:50 -- host/auth.sh@68 -- # digest=sha384 00:24:20.238 00:08:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:20.238 00:08:50 -- host/auth.sh@68 -- # keyid=4 00:24:20.238 00:08:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.238 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.238 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.238 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
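[editor's note] The repeated nvmf/common.sh@717-@731 block traced throughout this section is the helper that picks which IP the host should dial. Reconstructed from the trace, it keeps a transport-to-variable-name map and resolves the name indirectly; TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are assumed variable names, since the trace only shows their expanded values (tcp, 10.0.0.1):

  # Sketch of get_main_ns_ip as reconstructed from the nvmf/common.sh@717-@731 trace.
  get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
      [rdma]=NVMF_FIRST_TARGET_IP    # RDMA runs resolve the target-side address
      [tcp]=NVMF_INITIATOR_IP        # TCP runs resolve the initiator-side address
    )
    [[ -z $TEST_TRANSPORT ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip ]] && return 1
    [[ -z ${!ip} ]] && return 1      # the named variable itself must be populated
    echo "${!ip}"
  }

  TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
  get_main_ns_ip                      # prints 10.0.0.1, as echoed in the trace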
00:24:20.238 00:08:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.238 00:08:50 -- nvmf/common.sh@717 -- # local ip 00:24:20.238 00:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.238 00:08:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.238 00:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.238 00:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.238 00:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:20.238 00:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.238 00:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:20.238 00:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:20.238 00:08:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:20.238 00:08:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.238 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.238 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.499 nvme0n1 00:24:20.499 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.499 00:08:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.499 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.499 00:08:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.499 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.499 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.499 00:08:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.499 00:08:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.499 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.499 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.499 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.499 00:08:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.499 00:08:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.499 00:08:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:20.499 00:08:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.499 00:08:50 -- host/auth.sh@44 -- # digest=sha384 00:24:20.499 00:08:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:20.499 00:08:50 -- host/auth.sh@44 -- # keyid=0 00:24:20.499 00:08:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:20.499 00:08:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:20.499 00:08:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:20.499 00:08:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:20.499 00:08:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:24:20.499 00:08:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.499 00:08:50 -- host/auth.sh@68 -- # digest=sha384 00:24:20.499 00:08:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:20.499 00:08:50 -- host/auth.sh@68 -- # keyid=0 00:24:20.499 00:08:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:20.499 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.499 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.499 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.499 00:08:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.499 00:08:50 -- 
nvmf/common.sh@717 -- # local ip 00:24:20.499 00:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.499 00:08:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.499 00:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.499 00:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.499 00:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:20.499 00:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.499 00:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:20.499 00:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:20.499 00:08:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:20.499 00:08:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:20.499 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.499 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.759 nvme0n1 00:24:20.759 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.759 00:08:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.759 00:08:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.759 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.759 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.759 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.759 00:08:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.759 00:08:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.759 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.759 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.759 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.759 00:08:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.759 00:08:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:20.759 00:08:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.759 00:08:50 -- host/auth.sh@44 -- # digest=sha384 00:24:20.759 00:08:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:20.759 00:08:50 -- host/auth.sh@44 -- # keyid=1 00:24:20.759 00:08:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:20.759 00:08:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:20.759 00:08:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:20.759 00:08:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:20.759 00:08:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:24:20.759 00:08:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.759 00:08:50 -- host/auth.sh@68 -- # digest=sha384 00:24:20.759 00:08:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:20.759 00:08:50 -- host/auth.sh@68 -- # keyid=1 00:24:20.759 00:08:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:20.759 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.759 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.759 00:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.759 00:08:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.759 00:08:50 -- nvmf/common.sh@717 -- # local ip 00:24:20.759 00:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.759 00:08:50 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.759 00:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.759 00:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.759 00:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:20.759 00:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.759 00:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:20.759 00:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:20.759 00:08:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:20.759 00:08:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:20.759 00:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.759 00:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:21.020 nvme0n1 00:24:21.020 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.020 00:08:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.020 00:08:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.020 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.020 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.020 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.020 00:08:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.020 00:08:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.020 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.020 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.020 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.020 00:08:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.020 00:08:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:21.020 00:08:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.020 00:08:51 -- host/auth.sh@44 -- # digest=sha384 00:24:21.020 00:08:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.020 00:08:51 -- host/auth.sh@44 -- # keyid=2 00:24:21.020 00:08:51 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:21.020 00:08:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:21.281 00:08:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:21.281 00:08:51 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:21.281 00:08:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:24:21.281 00:08:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.281 00:08:51 -- host/auth.sh@68 -- # digest=sha384 00:24:21.281 00:08:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:21.281 00:08:51 -- host/auth.sh@68 -- # keyid=2 00:24:21.281 00:08:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.281 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.281 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.281 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.281 00:08:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.281 00:08:51 -- nvmf/common.sh@717 -- # local ip 00:24:21.281 00:08:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.281 00:08:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.281 00:08:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.281 00:08:51 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.281 00:08:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:21.281 00:08:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.281 00:08:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:21.281 00:08:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:21.281 00:08:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:21.281 00:08:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:21.281 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.281 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.541 nvme0n1 00:24:21.541 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.541 00:08:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.541 00:08:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.541 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.541 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.541 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.541 00:08:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.541 00:08:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.541 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.541 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.541 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.541 00:08:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.541 00:08:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:21.541 00:08:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.541 00:08:51 -- host/auth.sh@44 -- # digest=sha384 00:24:21.541 00:08:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.541 00:08:51 -- host/auth.sh@44 -- # keyid=3 00:24:21.541 00:08:51 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:21.542 00:08:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:21.542 00:08:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:21.542 00:08:51 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:21.542 00:08:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:24:21.542 00:08:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.542 00:08:51 -- host/auth.sh@68 -- # digest=sha384 00:24:21.542 00:08:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:21.542 00:08:51 -- host/auth.sh@68 -- # keyid=3 00:24:21.542 00:08:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.542 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.542 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.542 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.542 00:08:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.542 00:08:51 -- nvmf/common.sh@717 -- # local ip 00:24:21.542 00:08:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.542 00:08:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.542 00:08:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.542 00:08:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.542 00:08:51 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:24:21.542 00:08:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.542 00:08:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:21.542 00:08:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:21.542 00:08:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:21.542 00:08:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:21.542 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.542 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.803 nvme0n1 00:24:21.803 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.803 00:08:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.803 00:08:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.803 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.803 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.804 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.804 00:08:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.804 00:08:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.804 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.804 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.804 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.804 00:08:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.804 00:08:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:21.804 00:08:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.804 00:08:51 -- host/auth.sh@44 -- # digest=sha384 00:24:21.804 00:08:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.804 00:08:51 -- host/auth.sh@44 -- # keyid=4 00:24:21.804 00:08:51 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:21.804 00:08:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:21.804 00:08:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:21.804 00:08:51 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:21.804 00:08:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:24:21.804 00:08:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.804 00:08:51 -- host/auth.sh@68 -- # digest=sha384 00:24:21.804 00:08:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:21.804 00:08:51 -- host/auth.sh@68 -- # keyid=4 00:24:21.804 00:08:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.804 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.804 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.804 00:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.804 00:08:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.804 00:08:51 -- nvmf/common.sh@717 -- # local ip 00:24:21.804 00:08:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.804 00:08:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.804 00:08:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.804 00:08:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.804 00:08:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:21.804 00:08:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:24:21.804 00:08:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:21.804 00:08:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:21.804 00:08:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:21.804 00:08:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.804 00:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.804 00:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.064 nvme0n1 00:24:22.064 00:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.064 00:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.064 00:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:22.064 00:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.064 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:24:22.064 00:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.325 00:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.325 00:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.325 00:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.325 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:24:22.325 00:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.325 00:08:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.325 00:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.325 00:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:22.325 00:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:22.325 00:08:52 -- host/auth.sh@44 -- # digest=sha384 00:24:22.325 00:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.325 00:08:52 -- host/auth.sh@44 -- # keyid=0 00:24:22.325 00:08:52 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:22.325 00:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:22.325 00:08:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:22.325 00:08:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:22.325 00:08:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:24:22.325 00:08:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.325 00:08:52 -- host/auth.sh@68 -- # digest=sha384 00:24:22.325 00:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:22.325 00:08:52 -- host/auth.sh@68 -- # keyid=0 00:24:22.325 00:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:22.325 00:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.325 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:24:22.325 00:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.325 00:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.325 00:08:52 -- nvmf/common.sh@717 -- # local ip 00:24:22.325 00:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.325 00:08:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.325 00:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.325 00:08:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.325 00:08:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:22.325 00:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.325 00:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:22.325 
00:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:22.325 00:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:22.325 00:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:22.325 00:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.325 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:24:22.586 nvme0n1 00:24:22.586 00:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.586 00:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.586 00:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.586 00:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:22.586 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:24:22.586 00:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.847 00:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.847 00:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.847 00:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.847 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:24:22.848 00:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.848 00:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.848 00:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:22.848 00:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:22.848 00:08:52 -- host/auth.sh@44 -- # digest=sha384 00:24:22.848 00:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.848 00:08:52 -- host/auth.sh@44 -- # keyid=1 00:24:22.848 00:08:52 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:22.848 00:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:22.848 00:08:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:22.848 00:08:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:22.848 00:08:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:24:22.848 00:08:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.848 00:08:52 -- host/auth.sh@68 -- # digest=sha384 00:24:22.848 00:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:22.848 00:08:52 -- host/auth.sh@68 -- # keyid=1 00:24:22.848 00:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:22.848 00:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.848 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:24:22.848 00:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.848 00:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.848 00:08:52 -- nvmf/common.sh@717 -- # local ip 00:24:22.848 00:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.848 00:08:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.848 00:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.848 00:08:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.848 00:08:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:22.848 00:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.848 00:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:22.848 00:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:22.848 00:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
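[editor's note] The rpc_cmd calls traced above and continued below map directly onto SPDK's scripts/rpc.py. Reproducing one iteration by hand would look roughly like the following; the rpc.py path and a running nvmf host with the target listening on 10.0.0.1:4420 are assumptions, and "key1" refers to a key object set up earlier in the script (not shown in this excerpt), while the flags themselves are the ones visible in the trace:

  # One connect/verify/detach cycle, as exercised here for sha384 + ffdhe6144 + key1.
  rpc=scripts/rpc.py    # assumed location inside an SPDK checkout

  # Restrict the host driver to the digest/DH group under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Attach with the DH-HMAC-CHAP key slot configured for this iteration.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1

  # The iteration passes if the controller enumerates as nvme0; then tear it down.
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $rpc bdev_nvme_detach_controller nvme0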
00:24:22.848 00:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:22.848 00:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.848 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:24:23.108 nvme0n1 00:24:23.108 00:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.108 00:08:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.108 00:08:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.108 00:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.108 00:08:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.108 00:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.369 00:08:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.369 00:08:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.369 00:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.369 00:08:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.369 00:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.369 00:08:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.369 00:08:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:23.369 00:08:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.369 00:08:53 -- host/auth.sh@44 -- # digest=sha384 00:24:23.369 00:08:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.369 00:08:53 -- host/auth.sh@44 -- # keyid=2 00:24:23.369 00:08:53 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:23.369 00:08:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:23.369 00:08:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:23.369 00:08:53 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:23.369 00:08:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:24:23.369 00:08:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.369 00:08:53 -- host/auth.sh@68 -- # digest=sha384 00:24:23.369 00:08:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:23.369 00:08:53 -- host/auth.sh@68 -- # keyid=2 00:24:23.369 00:08:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:23.369 00:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.369 00:08:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.369 00:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.369 00:08:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.369 00:08:53 -- nvmf/common.sh@717 -- # local ip 00:24:23.369 00:08:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.369 00:08:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.369 00:08:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.369 00:08:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.369 00:08:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.369 00:08:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.369 00:08:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.369 00:08:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.369 00:08:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.369 00:08:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:23.369 00:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.369 00:08:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.940 nvme0n1 00:24:23.940 00:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.940 00:08:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.940 00:08:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.940 00:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.940 00:08:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.940 00:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.940 00:08:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.940 00:08:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.940 00:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.940 00:08:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.940 00:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.940 00:08:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.940 00:08:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:23.940 00:08:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.940 00:08:53 -- host/auth.sh@44 -- # digest=sha384 00:24:23.940 00:08:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.940 00:08:53 -- host/auth.sh@44 -- # keyid=3 00:24:23.940 00:08:53 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:23.940 00:08:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:23.940 00:08:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:23.940 00:08:53 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:23.940 00:08:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:24:23.940 00:08:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.940 00:08:53 -- host/auth.sh@68 -- # digest=sha384 00:24:23.940 00:08:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:23.940 00:08:53 -- host/auth.sh@68 -- # keyid=3 00:24:23.940 00:08:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:23.940 00:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.940 00:08:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.940 00:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.940 00:08:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.940 00:08:53 -- nvmf/common.sh@717 -- # local ip 00:24:23.940 00:08:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.940 00:08:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.940 00:08:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.940 00:08:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.940 00:08:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.940 00:08:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.940 00:08:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.940 00:08:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.940 00:08:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.940 00:08:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:23.940 00:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 
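[editor's note] On the target side, nvmet_auth_set_key only surfaces three echoes in the trace: the HMAC name, the DH group and the DHHC-1 secret. Those match the per-host authentication attributes the Linux kernel nvmet target exposes, so the helper presumably writes them into configfs along these lines; the configfs paths are an assumption, and only the echoed values (taken from the sha384/ffdhe6144/key 3 iteration traced just above) come from the log:

  # Hypothetical expansion of nvmet_auth_set_key, assuming the standard
  # /sys/kernel/config nvmet layout and the host NQN used by this test.
  hostnqn=nqn.2024-02.io.spdk:host0
  host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"
  echo ffdhe6144      > "$host_dir/dhchap_dhgroup"
  echo 'DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==:' \
      > "$host_dir/dhchap_key"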
00:24:23.940 00:08:53 -- common/autotest_common.sh@10 -- # set +x 00:24:24.201 nvme0n1 00:24:24.201 00:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.201 00:08:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.202 00:08:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.202 00:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.202 00:08:54 -- common/autotest_common.sh@10 -- # set +x 00:24:24.202 00:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.462 00:08:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.463 00:08:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.463 00:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.463 00:08:54 -- common/autotest_common.sh@10 -- # set +x 00:24:24.463 00:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.463 00:08:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.463 00:08:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:24.463 00:08:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.463 00:08:54 -- host/auth.sh@44 -- # digest=sha384 00:24:24.463 00:08:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.463 00:08:54 -- host/auth.sh@44 -- # keyid=4 00:24:24.463 00:08:54 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:24.463 00:08:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.463 00:08:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:24.463 00:08:54 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:24.463 00:08:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:24:24.463 00:08:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.463 00:08:54 -- host/auth.sh@68 -- # digest=sha384 00:24:24.463 00:08:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:24.463 00:08:54 -- host/auth.sh@68 -- # keyid=4 00:24:24.463 00:08:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.463 00:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.463 00:08:54 -- common/autotest_common.sh@10 -- # set +x 00:24:24.463 00:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.463 00:08:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.463 00:08:54 -- nvmf/common.sh@717 -- # local ip 00:24:24.463 00:08:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.463 00:08:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.463 00:08:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.463 00:08:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.463 00:08:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.463 00:08:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.463 00:08:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.463 00:08:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.463 00:08:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.463 00:08:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.463 00:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.463 00:08:54 -- common/autotest_common.sh@10 -- # set +x 00:24:24.723 
nvme0n1 00:24:24.723 00:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.984 00:08:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.984 00:08:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.984 00:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.984 00:08:54 -- common/autotest_common.sh@10 -- # set +x 00:24:24.984 00:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.984 00:08:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.984 00:08:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.984 00:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.984 00:08:54 -- common/autotest_common.sh@10 -- # set +x 00:24:24.984 00:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.984 00:08:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.984 00:08:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.984 00:08:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:24.984 00:08:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.984 00:08:55 -- host/auth.sh@44 -- # digest=sha384 00:24:24.984 00:08:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:24.984 00:08:55 -- host/auth.sh@44 -- # keyid=0 00:24:24.984 00:08:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:24.984 00:08:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.984 00:08:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:24.984 00:08:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:24.984 00:08:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:24:24.984 00:08:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.984 00:08:55 -- host/auth.sh@68 -- # digest=sha384 00:24:24.984 00:08:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:24.984 00:08:55 -- host/auth.sh@68 -- # keyid=0 00:24:24.984 00:08:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:24.984 00:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.984 00:08:55 -- common/autotest_common.sh@10 -- # set +x 00:24:24.984 00:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.984 00:08:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.984 00:08:55 -- nvmf/common.sh@717 -- # local ip 00:24:24.984 00:08:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.984 00:08:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.984 00:08:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.984 00:08:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.984 00:08:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.984 00:08:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.984 00:08:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.984 00:08:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.984 00:08:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.984 00:08:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:24.984 00:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.984 00:08:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.554 nvme0n1 00:24:25.554 00:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
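[editor note] Within each iteration, connect_authenticate is the initiator-side half. Pieced together from the rpc_cmd calls traced above (host/auth.sh@66-70), it pins the SPDK bdev_nvme layer to the single DH-CHAP digest and DH group under test and then attaches to the target with the matching key; a sketch using only the calls that are visible in this log:

connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  # restrict negotiation to exactly the digest/dhgroup under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # attach over TCP to the kernel target using the key configured on the target side
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid"
}

The stray nvme0n1 lines in the trace appear to be the bdev name printed by bdev_nvme_attach_controller once the DH-CHAP handshake succeeds.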
00:24:25.816 00:08:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.816 00:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.816 00:08:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.816 00:08:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.816 00:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.816 00:08:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.816 00:08:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.816 00:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.816 00:08:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.816 00:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.816 00:08:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.816 00:08:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:25.816 00:08:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.816 00:08:55 -- host/auth.sh@44 -- # digest=sha384 00:24:25.816 00:08:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:25.816 00:08:55 -- host/auth.sh@44 -- # keyid=1 00:24:25.816 00:08:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:25.816 00:08:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.816 00:08:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:25.816 00:08:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:25.816 00:08:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:24:25.816 00:08:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.817 00:08:55 -- host/auth.sh@68 -- # digest=sha384 00:24:25.817 00:08:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:25.817 00:08:55 -- host/auth.sh@68 -- # keyid=1 00:24:25.817 00:08:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:25.817 00:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.817 00:08:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.817 00:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.817 00:08:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.817 00:08:55 -- nvmf/common.sh@717 -- # local ip 00:24:25.817 00:08:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.817 00:08:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.817 00:08:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.817 00:08:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.817 00:08:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.817 00:08:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.817 00:08:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.817 00:08:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.817 00:08:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.817 00:08:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:25.817 00:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.817 00:08:55 -- common/autotest_common.sh@10 -- # set +x 00:24:26.389 nvme0n1 00:24:26.389 00:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.389 00:08:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.389 00:08:56 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:24:26.389 00:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.389 00:08:56 -- common/autotest_common.sh@10 -- # set +x 00:24:26.651 00:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.651 00:08:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.651 00:08:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.651 00:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.651 00:08:56 -- common/autotest_common.sh@10 -- # set +x 00:24:26.651 00:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.651 00:08:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.651 00:08:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:26.651 00:08:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.651 00:08:56 -- host/auth.sh@44 -- # digest=sha384 00:24:26.651 00:08:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.651 00:08:56 -- host/auth.sh@44 -- # keyid=2 00:24:26.651 00:08:56 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:26.651 00:08:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.651 00:08:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:26.651 00:08:56 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:26.651 00:08:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:24:26.651 00:08:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.651 00:08:56 -- host/auth.sh@68 -- # digest=sha384 00:24:26.651 00:08:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:26.651 00:08:56 -- host/auth.sh@68 -- # keyid=2 00:24:26.651 00:08:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:26.651 00:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.651 00:08:56 -- common/autotest_common.sh@10 -- # set +x 00:24:26.651 00:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.651 00:08:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.651 00:08:56 -- nvmf/common.sh@717 -- # local ip 00:24:26.651 00:08:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.651 00:08:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.651 00:08:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.651 00:08:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.651 00:08:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.651 00:08:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.651 00:08:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.651 00:08:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.651 00:08:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.651 00:08:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:26.651 00:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.651 00:08:56 -- common/autotest_common.sh@10 -- # set +x 00:24:27.225 nvme0n1 00:24:27.225 00:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.225 00:08:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.225 00:08:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.225 00:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.225 00:08:57 -- common/autotest_common.sh@10 -- # set +x 
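[editor note] The nvmf/common.sh@717-731 block that precedes every attach is get_main_ns_ip, which simply decides which address to dial. Reconstructed from the traced tests; the transport variable and the indirect expansion are assumptions about how the helper resolves the literal values ("tcp", 10.0.0.1) seen in the log:

get_main_ns_ip() {
  local ip
  local -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
    [tcp]=NVMF_INITIATOR_IP       # TCP runs (this job) dial the initiator-side IP
  )
  # TEST_TRANSPORT is an assumed variable name; the trace only shows the literal "tcp"
  [[ -n $TEST_TRANSPORT ]] || return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}
  [[ -n ${!ip} ]] || return 1     # e.g. NVMF_INITIATOR_IP=10.0.0.1 in this run
  echo "${!ip}"
}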
00:24:27.225 00:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.486 00:08:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.486 00:08:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.486 00:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.486 00:08:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.486 00:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.486 00:08:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.486 00:08:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:27.486 00:08:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.486 00:08:57 -- host/auth.sh@44 -- # digest=sha384 00:24:27.486 00:08:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.486 00:08:57 -- host/auth.sh@44 -- # keyid=3 00:24:27.486 00:08:57 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:27.486 00:08:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.486 00:08:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:27.486 00:08:57 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:27.486 00:08:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:24:27.486 00:08:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.486 00:08:57 -- host/auth.sh@68 -- # digest=sha384 00:24:27.486 00:08:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:27.486 00:08:57 -- host/auth.sh@68 -- # keyid=3 00:24:27.486 00:08:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:27.486 00:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.486 00:08:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.486 00:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.486 00:08:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.486 00:08:57 -- nvmf/common.sh@717 -- # local ip 00:24:27.486 00:08:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.486 00:08:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.486 00:08:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.486 00:08:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.486 00:08:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.486 00:08:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.486 00:08:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.486 00:08:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.486 00:08:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.487 00:08:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:27.487 00:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.487 00:08:57 -- common/autotest_common.sh@10 -- # set +x 00:24:28.058 nvme0n1 00:24:28.058 00:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.058 00:08:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.058 00:08:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.058 00:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.058 00:08:58 -- common/autotest_common.sh@10 -- # set +x 00:24:28.058 00:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.058 00:08:58 -- host/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:24:28.058 00:08:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.058 00:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.058 00:08:58 -- common/autotest_common.sh@10 -- # set +x 00:24:28.058 00:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.058 00:08:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.058 00:08:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:28.058 00:08:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.058 00:08:58 -- host/auth.sh@44 -- # digest=sha384 00:24:28.058 00:08:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.058 00:08:58 -- host/auth.sh@44 -- # keyid=4 00:24:28.058 00:08:58 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:28.058 00:08:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:28.058 00:08:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:28.058 00:08:58 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:28.058 00:08:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:24:28.058 00:08:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.058 00:08:58 -- host/auth.sh@68 -- # digest=sha384 00:24:28.058 00:08:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:28.058 00:08:58 -- host/auth.sh@68 -- # keyid=4 00:24:28.058 00:08:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.058 00:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.058 00:08:58 -- common/autotest_common.sh@10 -- # set +x 00:24:28.058 00:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.058 00:08:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.058 00:08:58 -- nvmf/common.sh@717 -- # local ip 00:24:28.058 00:08:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.058 00:08:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.058 00:08:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.058 00:08:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.058 00:08:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.058 00:08:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.058 00:08:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.058 00:08:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.058 00:08:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.058 00:08:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.058 00:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.058 00:08:58 -- common/autotest_common.sh@10 -- # set +x 00:24:29.036 nvme0n1 00:24:29.036 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.036 00:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.036 00:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.036 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.036 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.036 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.036 00:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.036 00:08:59 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:29.036 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.036 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.036 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.036 00:08:59 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:29.036 00:08:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.036 00:08:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.036 00:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:29.036 00:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.036 00:08:59 -- host/auth.sh@44 -- # digest=sha512 00:24:29.036 00:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.036 00:08:59 -- host/auth.sh@44 -- # keyid=0 00:24:29.036 00:08:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:29.036 00:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:29.036 00:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:29.036 00:08:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:29.036 00:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:24:29.036 00:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.036 00:08:59 -- host/auth.sh@68 -- # digest=sha512 00:24:29.036 00:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:29.036 00:08:59 -- host/auth.sh@68 -- # keyid=0 00:24:29.036 00:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.036 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.036 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.036 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.036 00:08:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.036 00:08:59 -- nvmf/common.sh@717 -- # local ip 00:24:29.036 00:08:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.036 00:08:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.036 00:08:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.036 00:08:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.036 00:08:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.036 00:08:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.036 00:08:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.036 00:08:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.036 00:08:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.036 00:08:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:29.036 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.036 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.036 nvme0n1 00:24:29.036 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.036 00:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.036 00:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.036 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.036 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.297 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.297 00:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.297 00:08:59 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:29.297 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.297 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.297 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.297 00:08:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.297 00:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:29.297 00:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.297 00:08:59 -- host/auth.sh@44 -- # digest=sha512 00:24:29.297 00:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.297 00:08:59 -- host/auth.sh@44 -- # keyid=1 00:24:29.297 00:08:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:29.297 00:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:29.297 00:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:29.297 00:08:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:29.297 00:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:24:29.297 00:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.297 00:08:59 -- host/auth.sh@68 -- # digest=sha512 00:24:29.297 00:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:29.297 00:08:59 -- host/auth.sh@68 -- # keyid=1 00:24:29.297 00:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.297 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.297 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.297 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.297 00:08:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.297 00:08:59 -- nvmf/common.sh@717 -- # local ip 00:24:29.297 00:08:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.297 00:08:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.297 00:08:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.298 00:08:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.298 00:08:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.298 00:08:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.298 00:08:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.298 00:08:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.298 00:08:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.298 00:08:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:29.298 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.298 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.298 nvme0n1 00:24:29.298 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.298 00:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.298 00:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.298 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.298 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.298 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.298 00:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.298 00:08:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.298 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 
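[editor note] After each attach the script runs the same check-and-teardown sequence that fills most of this trace: list the controllers, confirm the one named nvme0 exists (i.e. DH-CHAP authentication actually completed), then detach it so the next digest/dhgroup/key combination starts clean. The equivalent commands, taken directly from the traced lines:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                      # controller present => handshake succeeded
rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next combination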
00:24:29.558 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.558 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.558 00:08:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.558 00:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:29.558 00:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.558 00:08:59 -- host/auth.sh@44 -- # digest=sha512 00:24:29.558 00:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.558 00:08:59 -- host/auth.sh@44 -- # keyid=2 00:24:29.558 00:08:59 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:29.558 00:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:29.558 00:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:29.558 00:08:59 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:29.558 00:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:24:29.558 00:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.558 00:08:59 -- host/auth.sh@68 -- # digest=sha512 00:24:29.558 00:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:29.558 00:08:59 -- host/auth.sh@68 -- # keyid=2 00:24:29.558 00:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.558 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.558 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.558 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.558 00:08:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.558 00:08:59 -- nvmf/common.sh@717 -- # local ip 00:24:29.558 00:08:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.558 00:08:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.558 00:08:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.558 00:08:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.558 00:08:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.558 00:08:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.558 00:08:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.558 00:08:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.558 00:08:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.558 00:08:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:29.558 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.558 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.558 nvme0n1 00:24:29.559 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.559 00:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.559 00:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.559 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.559 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.559 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.559 00:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.559 00:08:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.559 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.559 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.559 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.559 00:08:59 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.559 00:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:29.559 00:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.559 00:08:59 -- host/auth.sh@44 -- # digest=sha512 00:24:29.559 00:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.559 00:08:59 -- host/auth.sh@44 -- # keyid=3 00:24:29.559 00:08:59 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:29.559 00:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:29.559 00:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:29.559 00:08:59 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:29.559 00:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:24:29.559 00:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.559 00:08:59 -- host/auth.sh@68 -- # digest=sha512 00:24:29.559 00:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:29.559 00:08:59 -- host/auth.sh@68 -- # keyid=3 00:24:29.559 00:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.559 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.559 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.559 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.559 00:08:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.559 00:08:59 -- nvmf/common.sh@717 -- # local ip 00:24:29.559 00:08:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.559 00:08:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.559 00:08:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.559 00:08:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.559 00:08:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.559 00:08:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.559 00:08:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.559 00:08:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.559 00:08:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.819 00:08:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:29.819 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.819 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.819 nvme0n1 00:24:29.819 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.819 00:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.819 00:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.819 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.819 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.819 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.819 00:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.819 00:08:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.819 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.819 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.819 00:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.819 00:08:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.819 00:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe2048 4 00:24:29.819 00:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.819 00:08:59 -- host/auth.sh@44 -- # digest=sha512 00:24:29.819 00:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.819 00:08:59 -- host/auth.sh@44 -- # keyid=4 00:24:29.819 00:08:59 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:29.819 00:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:29.819 00:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:29.819 00:08:59 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:29.819 00:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:24:29.819 00:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.819 00:08:59 -- host/auth.sh@68 -- # digest=sha512 00:24:29.819 00:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:29.819 00:08:59 -- host/auth.sh@68 -- # keyid=4 00:24:29.819 00:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.819 00:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.819 00:08:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.819 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.819 00:09:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.819 00:09:00 -- nvmf/common.sh@717 -- # local ip 00:24:29.819 00:09:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.819 00:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.819 00:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.819 00:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.819 00:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.819 00:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.819 00:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.819 00:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.819 00:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.819 00:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.819 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.819 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.079 nvme0n1 00:24:30.079 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.079 00:09:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.079 00:09:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.079 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.079 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.079 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.079 00:09:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.079 00:09:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.079 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.079 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.079 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.079 00:09:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.079 00:09:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.079 00:09:00 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe3072 0 00:24:30.079 00:09:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.079 00:09:00 -- host/auth.sh@44 -- # digest=sha512 00:24:30.079 00:09:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.079 00:09:00 -- host/auth.sh@44 -- # keyid=0 00:24:30.079 00:09:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:30.079 00:09:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:30.079 00:09:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:30.079 00:09:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:30.079 00:09:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:24:30.079 00:09:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.079 00:09:00 -- host/auth.sh@68 -- # digest=sha512 00:24:30.079 00:09:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:30.079 00:09:00 -- host/auth.sh@68 -- # keyid=0 00:24:30.079 00:09:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.079 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.079 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.079 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.079 00:09:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.079 00:09:00 -- nvmf/common.sh@717 -- # local ip 00:24:30.079 00:09:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.079 00:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.079 00:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.079 00:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.079 00:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.079 00:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.079 00:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.079 00:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.079 00:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.079 00:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:30.079 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.079 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.340 nvme0n1 00:24:30.340 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.340 00:09:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.340 00:09:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.340 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.340 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.340 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.340 00:09:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.340 00:09:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.340 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.340 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.340 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.340 00:09:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.340 00:09:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:30.340 00:09:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.340 00:09:00 -- host/auth.sh@44 -- # digest=sha512 00:24:30.340 
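[editor note] On the target side, nvmet_auth_set_key is traced as three echoes: the digest wrapped as 'hmac(shaN)', the FFDHE group name, and a DHHC-1 secret. That is consistent with pushing DH-HMAC-CHAP parameters into the kernel nvmet host entry; the configfs location and attribute names below are an assumption for illustration and do not appear in this log:

nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 keyid=$3
  local key=${keys[$keyid]}                 # key string from the keys[] array traced above
  # assumed configfs path for the allowed host's DH-CHAP settings
  local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac($digest)" > "$host/dhchap_hash"     # 'hmac(sha512)' in this part of the run
  echo "$dhgroup"      > "$host/dhchap_dhgroup"  # ffdhe2048 ... ffdhe8192
  echo "$key"          > "$host/dhchap_key"      # DHHC-1:<id>:<base64 secret>:
}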
00:09:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.340 00:09:00 -- host/auth.sh@44 -- # keyid=1 00:24:30.340 00:09:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:30.340 00:09:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:30.340 00:09:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:30.340 00:09:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:30.340 00:09:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:24:30.340 00:09:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.340 00:09:00 -- host/auth.sh@68 -- # digest=sha512 00:24:30.340 00:09:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:30.340 00:09:00 -- host/auth.sh@68 -- # keyid=1 00:24:30.340 00:09:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.340 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.340 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.340 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.340 00:09:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.340 00:09:00 -- nvmf/common.sh@717 -- # local ip 00:24:30.340 00:09:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.340 00:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.340 00:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.340 00:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.340 00:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.340 00:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.340 00:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.340 00:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.340 00:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.340 00:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:30.340 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.340 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.600 nvme0n1 00:24:30.600 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.600 00:09:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.600 00:09:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.600 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.600 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.600 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.600 00:09:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.600 00:09:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.600 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.600 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.600 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.600 00:09:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.600 00:09:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:30.600 00:09:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.600 00:09:00 -- host/auth.sh@44 -- # digest=sha512 00:24:30.600 00:09:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.600 00:09:00 -- host/auth.sh@44 -- # keyid=2 00:24:30.600 
00:09:00 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:30.600 00:09:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:30.600 00:09:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:30.600 00:09:00 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:30.600 00:09:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:24:30.600 00:09:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.600 00:09:00 -- host/auth.sh@68 -- # digest=sha512 00:24:30.600 00:09:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:30.600 00:09:00 -- host/auth.sh@68 -- # keyid=2 00:24:30.600 00:09:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.600 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.600 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.600 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.600 00:09:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.600 00:09:00 -- nvmf/common.sh@717 -- # local ip 00:24:30.600 00:09:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.600 00:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.600 00:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.600 00:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.600 00:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.600 00:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.600 00:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.600 00:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.600 00:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.600 00:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:30.600 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.600 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.861 nvme0n1 00:24:30.861 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.861 00:09:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.861 00:09:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.861 00:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.861 00:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.861 00:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.861 00:09:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.861 00:09:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.861 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.861 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:30.861 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.861 00:09:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.861 00:09:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:30.861 00:09:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.861 00:09:01 -- host/auth.sh@44 -- # digest=sha512 00:24:30.861 00:09:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.861 00:09:01 -- host/auth.sh@44 -- # keyid=3 00:24:30.861 00:09:01 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:30.861 00:09:01 -- host/auth.sh@47 -- # 
echo 'hmac(sha512)' 00:24:30.861 00:09:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:30.861 00:09:01 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:30.861 00:09:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:24:30.861 00:09:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.861 00:09:01 -- host/auth.sh@68 -- # digest=sha512 00:24:30.861 00:09:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:30.861 00:09:01 -- host/auth.sh@68 -- # keyid=3 00:24:30.861 00:09:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.861 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.861 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:30.861 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.861 00:09:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.861 00:09:01 -- nvmf/common.sh@717 -- # local ip 00:24:30.861 00:09:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.861 00:09:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.861 00:09:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.861 00:09:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.861 00:09:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.861 00:09:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.861 00:09:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.861 00:09:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.861 00:09:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.861 00:09:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:30.861 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.861 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.121 nvme0n1 00:24:31.121 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.121 00:09:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.121 00:09:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.121 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.121 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.121 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.121 00:09:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.121 00:09:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.121 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.121 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.121 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.121 00:09:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.121 00:09:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:31.121 00:09:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.121 00:09:01 -- host/auth.sh@44 -- # digest=sha512 00:24:31.121 00:09:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.121 00:09:01 -- host/auth.sh@44 -- # keyid=4 00:24:31.121 00:09:01 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:31.121 00:09:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:31.121 00:09:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:31.121 
00:09:01 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:31.121 00:09:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:24:31.121 00:09:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.121 00:09:01 -- host/auth.sh@68 -- # digest=sha512 00:24:31.121 00:09:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:31.121 00:09:01 -- host/auth.sh@68 -- # keyid=4 00:24:31.121 00:09:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:31.121 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.121 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.121 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.121 00:09:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.121 00:09:01 -- nvmf/common.sh@717 -- # local ip 00:24:31.121 00:09:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.121 00:09:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.121 00:09:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.121 00:09:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.121 00:09:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.121 00:09:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.121 00:09:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.121 00:09:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.121 00:09:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.121 00:09:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.121 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.121 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.381 nvme0n1 00:24:31.381 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.381 00:09:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.381 00:09:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.381 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.381 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.381 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.381 00:09:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.381 00:09:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.381 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.381 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.381 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.381 00:09:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.381 00:09:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.381 00:09:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:31.381 00:09:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.381 00:09:01 -- host/auth.sh@44 -- # digest=sha512 00:24:31.381 00:09:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.381 00:09:01 -- host/auth.sh@44 -- # keyid=0 00:24:31.381 00:09:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:31.381 00:09:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:31.381 00:09:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:31.381 00:09:01 -- host/auth.sh@49 -- # echo 
DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:31.381 00:09:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:24:31.381 00:09:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.381 00:09:01 -- host/auth.sh@68 -- # digest=sha512 00:24:31.381 00:09:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:31.381 00:09:01 -- host/auth.sh@68 -- # keyid=0 00:24:31.381 00:09:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.381 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.381 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.381 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.381 00:09:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.381 00:09:01 -- nvmf/common.sh@717 -- # local ip 00:24:31.381 00:09:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.381 00:09:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.381 00:09:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.381 00:09:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.381 00:09:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.381 00:09:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.381 00:09:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.381 00:09:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.381 00:09:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.381 00:09:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:31.381 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.381 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.642 nvme0n1 00:24:31.642 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.642 00:09:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.642 00:09:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.642 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.642 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.902 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.902 00:09:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.902 00:09:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.902 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.902 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.902 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.902 00:09:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.902 00:09:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:31.902 00:09:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.902 00:09:01 -- host/auth.sh@44 -- # digest=sha512 00:24:31.902 00:09:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.902 00:09:01 -- host/auth.sh@44 -- # keyid=1 00:24:31.902 00:09:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:31.902 00:09:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:31.902 00:09:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:31.902 00:09:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:31.902 00:09:01 -- host/auth.sh@111 -- # 
connect_authenticate sha512 ffdhe4096 1 00:24:31.902 00:09:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.902 00:09:01 -- host/auth.sh@68 -- # digest=sha512 00:24:31.902 00:09:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:31.902 00:09:01 -- host/auth.sh@68 -- # keyid=1 00:24:31.902 00:09:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.902 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.902 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.902 00:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.902 00:09:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.902 00:09:01 -- nvmf/common.sh@717 -- # local ip 00:24:31.902 00:09:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.902 00:09:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.902 00:09:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.902 00:09:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.902 00:09:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.902 00:09:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.902 00:09:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.902 00:09:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.902 00:09:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.902 00:09:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:31.902 00:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.902 00:09:01 -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 nvme0n1 00:24:32.162 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.162 00:09:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.162 00:09:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:32.162 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.162 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.162 00:09:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.162 00:09:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.162 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.162 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.162 00:09:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:32.162 00:09:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:32.162 00:09:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:32.162 00:09:02 -- host/auth.sh@44 -- # digest=sha512 00:24:32.162 00:09:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.162 00:09:02 -- host/auth.sh@44 -- # keyid=2 00:24:32.162 00:09:02 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:32.162 00:09:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:32.162 00:09:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:32.162 00:09:02 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:32.162 00:09:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:24:32.162 00:09:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:32.162 00:09:02 -- host/auth.sh@68 -- # 
digest=sha512 00:24:32.162 00:09:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:32.162 00:09:02 -- host/auth.sh@68 -- # keyid=2 00:24:32.162 00:09:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:32.162 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.162 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.162 00:09:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:32.162 00:09:02 -- nvmf/common.sh@717 -- # local ip 00:24:32.162 00:09:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:32.162 00:09:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:32.162 00:09:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.162 00:09:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.162 00:09:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:32.162 00:09:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.162 00:09:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:32.162 00:09:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:32.162 00:09:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.162 00:09:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:32.162 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.162 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.422 nvme0n1 00:24:32.422 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.422 00:09:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.422 00:09:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:32.422 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.422 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.422 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.422 00:09:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.422 00:09:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.422 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.422 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.422 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.422 00:09:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:32.422 00:09:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:32.422 00:09:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:32.422 00:09:02 -- host/auth.sh@44 -- # digest=sha512 00:24:32.422 00:09:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.422 00:09:02 -- host/auth.sh@44 -- # keyid=3 00:24:32.422 00:09:02 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:32.422 00:09:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:32.422 00:09:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:32.422 00:09:02 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:32.422 00:09:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:24:32.422 00:09:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:32.422 00:09:02 -- host/auth.sh@68 -- # digest=sha512 00:24:32.422 00:09:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:32.422 00:09:02 -- host/auth.sh@68 
-- # keyid=3 00:24:32.422 00:09:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:32.422 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.422 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.422 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.422 00:09:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:32.422 00:09:02 -- nvmf/common.sh@717 -- # local ip 00:24:32.422 00:09:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:32.422 00:09:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:32.681 00:09:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.681 00:09:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.681 00:09:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:32.681 00:09:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.681 00:09:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:32.681 00:09:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:32.681 00:09:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.681 00:09:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:32.681 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.681 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 nvme0n1 00:24:32.940 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.940 00:09:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.940 00:09:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:32.940 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.940 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.940 00:09:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.940 00:09:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.940 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.940 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.940 00:09:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:32.940 00:09:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:32.940 00:09:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:32.940 00:09:02 -- host/auth.sh@44 -- # digest=sha512 00:24:32.940 00:09:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.940 00:09:02 -- host/auth.sh@44 -- # keyid=4 00:24:32.940 00:09:02 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:32.940 00:09:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:32.940 00:09:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:32.940 00:09:02 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:32.940 00:09:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:24:32.940 00:09:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:32.940 00:09:02 -- host/auth.sh@68 -- # digest=sha512 00:24:32.940 00:09:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:32.940 00:09:02 -- host/auth.sh@68 -- # keyid=4 00:24:32.940 00:09:02 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:32.940 00:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.940 00:09:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 00:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.940 00:09:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:32.940 00:09:02 -- nvmf/common.sh@717 -- # local ip 00:24:32.940 00:09:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:32.940 00:09:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:32.940 00:09:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.940 00:09:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.940 00:09:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:32.940 00:09:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.940 00:09:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:32.940 00:09:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:32.940 00:09:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.940 00:09:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.940 00:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.940 00:09:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.200 nvme0n1 00:24:33.200 00:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.200 00:09:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.200 00:09:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:33.200 00:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.200 00:09:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.200 00:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.200 00:09:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.200 00:09:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.200 00:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.200 00:09:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.200 00:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.200 00:09:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.200 00:09:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.200 00:09:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:33.200 00:09:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.200 00:09:03 -- host/auth.sh@44 -- # digest=sha512 00:24:33.200 00:09:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.200 00:09:03 -- host/auth.sh@44 -- # keyid=0 00:24:33.200 00:09:03 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:33.200 00:09:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:33.200 00:09:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:33.200 00:09:03 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:33.200 00:09:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:24:33.200 00:09:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.200 00:09:03 -- host/auth.sh@68 -- # digest=sha512 00:24:33.200 00:09:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:33.200 00:09:03 -- host/auth.sh@68 -- # keyid=0 00:24:33.200 00:09:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:33.200 
00:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.200 00:09:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.200 00:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.200 00:09:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.200 00:09:03 -- nvmf/common.sh@717 -- # local ip 00:24:33.200 00:09:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.200 00:09:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.200 00:09:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.200 00:09:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.200 00:09:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.200 00:09:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.200 00:09:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.200 00:09:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.200 00:09:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.200 00:09:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:33.200 00:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.200 00:09:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.771 nvme0n1 00:24:33.771 00:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.771 00:09:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.771 00:09:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:33.771 00:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.771 00:09:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.771 00:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.771 00:09:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.771 00:09:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.771 00:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.771 00:09:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.771 00:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.771 00:09:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.771 00:09:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:33.771 00:09:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.771 00:09:03 -- host/auth.sh@44 -- # digest=sha512 00:24:33.771 00:09:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.771 00:09:03 -- host/auth.sh@44 -- # keyid=1 00:24:33.771 00:09:03 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:33.771 00:09:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:33.771 00:09:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:33.771 00:09:03 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:33.771 00:09:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:24:33.771 00:09:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.771 00:09:03 -- host/auth.sh@68 -- # digest=sha512 00:24:33.771 00:09:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:33.771 00:09:03 -- host/auth.sh@68 -- # keyid=1 00:24:33.771 00:09:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:33.771 00:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.771 00:09:03 -- common/autotest_common.sh@10 -- # 
set +x 00:24:33.771 00:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.771 00:09:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.771 00:09:03 -- nvmf/common.sh@717 -- # local ip 00:24:33.771 00:09:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.771 00:09:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.771 00:09:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.771 00:09:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.771 00:09:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.771 00:09:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.771 00:09:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.771 00:09:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.771 00:09:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.771 00:09:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:33.771 00:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.771 00:09:03 -- common/autotest_common.sh@10 -- # set +x 00:24:34.341 nvme0n1 00:24:34.341 00:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.341 00:09:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.341 00:09:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.341 00:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.342 00:09:04 -- common/autotest_common.sh@10 -- # set +x 00:24:34.342 00:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.342 00:09:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.342 00:09:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.342 00:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.342 00:09:04 -- common/autotest_common.sh@10 -- # set +x 00:24:34.342 00:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.342 00:09:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.342 00:09:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:34.342 00:09:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.342 00:09:04 -- host/auth.sh@44 -- # digest=sha512 00:24:34.342 00:09:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.342 00:09:04 -- host/auth.sh@44 -- # keyid=2 00:24:34.342 00:09:04 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:34.342 00:09:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:34.342 00:09:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:34.342 00:09:04 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:34.342 00:09:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:24:34.342 00:09:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.342 00:09:04 -- host/auth.sh@68 -- # digest=sha512 00:24:34.342 00:09:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:34.342 00:09:04 -- host/auth.sh@68 -- # keyid=2 00:24:34.342 00:09:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:34.342 00:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.342 00:09:04 -- common/autotest_common.sh@10 -- # set +x 00:24:34.342 00:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.342 00:09:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.342 00:09:04 -- 
nvmf/common.sh@717 -- # local ip 00:24:34.342 00:09:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.342 00:09:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.342 00:09:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.342 00:09:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.342 00:09:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.342 00:09:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.342 00:09:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.342 00:09:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.342 00:09:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.342 00:09:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:34.342 00:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.342 00:09:04 -- common/autotest_common.sh@10 -- # set +x 00:24:34.935 nvme0n1 00:24:34.935 00:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.935 00:09:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.935 00:09:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.935 00:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.935 00:09:04 -- common/autotest_common.sh@10 -- # set +x 00:24:34.935 00:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.935 00:09:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.935 00:09:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.935 00:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.935 00:09:04 -- common/autotest_common.sh@10 -- # set +x 00:24:34.935 00:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.935 00:09:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.935 00:09:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:34.935 00:09:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.935 00:09:04 -- host/auth.sh@44 -- # digest=sha512 00:24:34.935 00:09:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.935 00:09:04 -- host/auth.sh@44 -- # keyid=3 00:24:34.935 00:09:04 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:34.935 00:09:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:34.935 00:09:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:34.935 00:09:04 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:34.935 00:09:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:24:34.935 00:09:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.935 00:09:04 -- host/auth.sh@68 -- # digest=sha512 00:24:34.935 00:09:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:34.935 00:09:04 -- host/auth.sh@68 -- # keyid=3 00:24:34.935 00:09:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:34.935 00:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.935 00:09:04 -- common/autotest_common.sh@10 -- # set +x 00:24:34.935 00:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.935 00:09:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.935 00:09:04 -- nvmf/common.sh@717 -- # local ip 00:24:34.935 00:09:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.935 00:09:04 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.935 00:09:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.935 00:09:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.935 00:09:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.935 00:09:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.935 00:09:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.935 00:09:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.935 00:09:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.935 00:09:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:34.935 00:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.935 00:09:04 -- common/autotest_common.sh@10 -- # set +x 00:24:35.195 nvme0n1 00:24:35.195 00:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.195 00:09:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.195 00:09:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.195 00:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.195 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.195 00:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.454 00:09:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.454 00:09:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.454 00:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.454 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.454 00:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.454 00:09:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.454 00:09:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:35.454 00:09:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.454 00:09:05 -- host/auth.sh@44 -- # digest=sha512 00:24:35.454 00:09:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.454 00:09:05 -- host/auth.sh@44 -- # keyid=4 00:24:35.454 00:09:05 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:35.454 00:09:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:35.454 00:09:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:35.454 00:09:05 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:35.454 00:09:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:24:35.454 00:09:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.454 00:09:05 -- host/auth.sh@68 -- # digest=sha512 00:24:35.454 00:09:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:35.454 00:09:05 -- host/auth.sh@68 -- # keyid=4 00:24:35.454 00:09:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:35.454 00:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.454 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.454 00:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.454 00:09:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.454 00:09:05 -- nvmf/common.sh@717 -- # local ip 00:24:35.454 00:09:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.454 00:09:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.454 00:09:05 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.454 00:09:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.454 00:09:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.454 00:09:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.454 00:09:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.454 00:09:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.454 00:09:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.454 00:09:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.454 00:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.454 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.714 nvme0n1 00:24:35.714 00:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.714 00:09:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.714 00:09:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.714 00:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.714 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.714 00:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.973 00:09:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.973 00:09:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.973 00:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.973 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.973 00:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.973 00:09:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.973 00:09:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.973 00:09:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:35.973 00:09:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.973 00:09:05 -- host/auth.sh@44 -- # digest=sha512 00:24:35.973 00:09:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.973 00:09:05 -- host/auth.sh@44 -- # keyid=0 00:24:35.973 00:09:05 -- host/auth.sh@45 -- # key=DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:35.973 00:09:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:35.973 00:09:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:35.973 00:09:05 -- host/auth.sh@49 -- # echo DHHC-1:00:NGY4NTZiN2U1NmQ3NTYzNmI0MGQzMWE4YWM1MjVhYmUs8CfM: 00:24:35.973 00:09:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:24:35.973 00:09:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.973 00:09:05 -- host/auth.sh@68 -- # digest=sha512 00:24:35.973 00:09:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:35.973 00:09:05 -- host/auth.sh@68 -- # keyid=0 00:24:35.973 00:09:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:35.973 00:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.973 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:24:35.973 00:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.973 00:09:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.973 00:09:05 -- nvmf/common.sh@717 -- # local ip 00:24:35.973 00:09:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.973 00:09:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.973 00:09:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.973 00:09:05 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.973 00:09:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.973 00:09:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.973 00:09:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.974 00:09:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.974 00:09:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.974 00:09:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:35.974 00:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.974 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:24:36.542 nvme0n1 00:24:36.542 00:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.542 00:09:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.542 00:09:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.542 00:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.542 00:09:06 -- common/autotest_common.sh@10 -- # set +x 00:24:36.542 00:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.800 00:09:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.800 00:09:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.800 00:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.800 00:09:06 -- common/autotest_common.sh@10 -- # set +x 00:24:36.800 00:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.800 00:09:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.801 00:09:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:36.801 00:09:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.801 00:09:06 -- host/auth.sh@44 -- # digest=sha512 00:24:36.801 00:09:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.801 00:09:06 -- host/auth.sh@44 -- # keyid=1 00:24:36.801 00:09:06 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:36.801 00:09:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.801 00:09:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:36.801 00:09:06 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:36.801 00:09:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:24:36.801 00:09:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.801 00:09:06 -- host/auth.sh@68 -- # digest=sha512 00:24:36.801 00:09:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:36.801 00:09:06 -- host/auth.sh@68 -- # keyid=1 00:24:36.801 00:09:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:36.801 00:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.801 00:09:06 -- common/autotest_common.sh@10 -- # set +x 00:24:36.801 00:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.801 00:09:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.801 00:09:06 -- nvmf/common.sh@717 -- # local ip 00:24:36.801 00:09:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.801 00:09:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.801 00:09:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.801 00:09:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.801 00:09:06 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:24:36.801 00:09:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.801 00:09:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.801 00:09:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.801 00:09:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.801 00:09:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:36.801 00:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.801 00:09:06 -- common/autotest_common.sh@10 -- # set +x 00:24:37.370 nvme0n1 00:24:37.370 00:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.370 00:09:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.370 00:09:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.370 00:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.370 00:09:07 -- common/autotest_common.sh@10 -- # set +x 00:24:37.370 00:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.630 00:09:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.630 00:09:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.630 00:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.630 00:09:07 -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 00:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.630 00:09:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.630 00:09:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:37.630 00:09:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.630 00:09:07 -- host/auth.sh@44 -- # digest=sha512 00:24:37.630 00:09:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:37.630 00:09:07 -- host/auth.sh@44 -- # keyid=2 00:24:37.630 00:09:07 -- host/auth.sh@45 -- # key=DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:37.630 00:09:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.630 00:09:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:37.630 00:09:07 -- host/auth.sh@49 -- # echo DHHC-1:01:OWEwNzdkZjQ1YjRkNzk1OWNmNmZlZTc4Y2YzZDY3MTiYrJHZ: 00:24:37.630 00:09:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:24:37.630 00:09:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.630 00:09:07 -- host/auth.sh@68 -- # digest=sha512 00:24:37.630 00:09:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:37.630 00:09:07 -- host/auth.sh@68 -- # keyid=2 00:24:37.630 00:09:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:37.630 00:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.630 00:09:07 -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 00:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.630 00:09:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.630 00:09:07 -- nvmf/common.sh@717 -- # local ip 00:24:37.630 00:09:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.630 00:09:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.630 00:09:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.630 00:09:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.630 00:09:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.630 00:09:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.630 00:09:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.630 
00:09:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.630 00:09:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.630 00:09:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.630 00:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.630 00:09:07 -- common/autotest_common.sh@10 -- # set +x 00:24:38.200 nvme0n1 00:24:38.200 00:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.200 00:09:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.200 00:09:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.200 00:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.200 00:09:08 -- common/autotest_common.sh@10 -- # set +x 00:24:38.200 00:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.461 00:09:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.461 00:09:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.461 00:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.461 00:09:08 -- common/autotest_common.sh@10 -- # set +x 00:24:38.461 00:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.461 00:09:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.461 00:09:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:38.461 00:09:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.461 00:09:08 -- host/auth.sh@44 -- # digest=sha512 00:24:38.461 00:09:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.461 00:09:08 -- host/auth.sh@44 -- # keyid=3 00:24:38.461 00:09:08 -- host/auth.sh@45 -- # key=DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:38.461 00:09:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.461 00:09:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:38.461 00:09:08 -- host/auth.sh@49 -- # echo DHHC-1:02:NDBhM2QxMTQxZDdiZjVhNDQyOTAyYjc1ZjYyNWJjZjljNzg4ZGZlYmIyZTAwYzY3mKMR9A==: 00:24:38.461 00:09:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:24:38.461 00:09:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.461 00:09:08 -- host/auth.sh@68 -- # digest=sha512 00:24:38.461 00:09:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:38.461 00:09:08 -- host/auth.sh@68 -- # keyid=3 00:24:38.461 00:09:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:38.461 00:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.461 00:09:08 -- common/autotest_common.sh@10 -- # set +x 00:24:38.461 00:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.461 00:09:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.461 00:09:08 -- nvmf/common.sh@717 -- # local ip 00:24:38.461 00:09:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.461 00:09:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.461 00:09:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.461 00:09:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.461 00:09:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.461 00:09:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.461 00:09:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.461 00:09:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.461 00:09:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
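Note: the trace above repeats one and the same pattern for every digest/DH-group/key-id combination. A condensed sketch of a single iteration, reconstructed only from the commands visible in this trace (nvmet_auth_set_key and connect_authenticate are helpers inside host/auth.sh whose internals are not reproduced here, so treat this as an illustration rather than the verbatim script), looks roughly like this:

    # offer the digest and DH group under test on the SPDK initiator side
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # connect to the kernel nvmet target with the DH-CHAP key id being exercised
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
    # successful authentication leaves a controller named nvme0 behind;
    # verify it, then detach before moving on to the next key
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
    rpc_cmd bdev_nvme_detach_controller nvme0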
00:24:38.461 00:09:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:38.461 00:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.461 00:09:08 -- common/autotest_common.sh@10 -- # set +x 00:24:39.034 nvme0n1 00:24:39.034 00:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.034 00:09:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.034 00:09:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.034 00:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.034 00:09:09 -- common/autotest_common.sh@10 -- # set +x 00:24:39.034 00:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.295 00:09:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.295 00:09:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.295 00:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.295 00:09:09 -- common/autotest_common.sh@10 -- # set +x 00:24:39.295 00:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.295 00:09:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.295 00:09:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:39.295 00:09:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.295 00:09:09 -- host/auth.sh@44 -- # digest=sha512 00:24:39.296 00:09:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.296 00:09:09 -- host/auth.sh@44 -- # keyid=4 00:24:39.296 00:09:09 -- host/auth.sh@45 -- # key=DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:39.296 00:09:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:39.296 00:09:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:39.296 00:09:09 -- host/auth.sh@49 -- # echo DHHC-1:03:NzhjYzEwMzRlYjdiN2FjMmY2ZTllYjE5YzhhMGQ3NTQ1ODc5ODg0ZDcwZjgxNTZmNTFhYzAzZDgxMmU0NGFiNLsDpV4=: 00:24:39.296 00:09:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:24:39.296 00:09:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.296 00:09:09 -- host/auth.sh@68 -- # digest=sha512 00:24:39.296 00:09:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:39.296 00:09:09 -- host/auth.sh@68 -- # keyid=4 00:24:39.296 00:09:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:39.296 00:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.296 00:09:09 -- common/autotest_common.sh@10 -- # set +x 00:24:39.296 00:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.296 00:09:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.296 00:09:09 -- nvmf/common.sh@717 -- # local ip 00:24:39.296 00:09:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.296 00:09:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.296 00:09:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.296 00:09:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.296 00:09:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.296 00:09:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.296 00:09:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.296 00:09:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.296 00:09:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.296 00:09:09 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:39.296 00:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.296 00:09:09 -- common/autotest_common.sh@10 -- # set +x 00:24:39.877 nvme0n1 00:24:39.878 00:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.878 00:09:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.878 00:09:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.878 00:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.878 00:09:10 -- common/autotest_common.sh@10 -- # set +x 00:24:39.878 00:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.141 00:09:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.141 00:09:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.141 00:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.141 00:09:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.141 00:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.141 00:09:10 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:40.141 00:09:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.141 00:09:10 -- host/auth.sh@44 -- # digest=sha256 00:24:40.141 00:09:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:40.141 00:09:10 -- host/auth.sh@44 -- # keyid=1 00:24:40.141 00:09:10 -- host/auth.sh@45 -- # key=DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:40.141 00:09:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:40.141 00:09:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:40.141 00:09:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NDllNWVjZTA4ZDExMzhiNTM3MjdiZjRmNWM2MGRiOTM0NGVlYjk1ZTE1ODVkMDNhaQXhMA==: 00:24:40.141 00:09:10 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:40.141 00:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.141 00:09:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.141 00:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.141 00:09:10 -- host/auth.sh@119 -- # get_main_ns_ip 00:24:40.141 00:09:10 -- nvmf/common.sh@717 -- # local ip 00:24:40.141 00:09:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.141 00:09:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.141 00:09:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.141 00:09:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.141 00:09:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.141 00:09:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.141 00:09:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.141 00:09:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.141 00:09:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.141 00:09:10 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:40.141 00:09:10 -- common/autotest_common.sh@638 -- # local es=0 00:24:40.141 00:09:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:40.141 00:09:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:40.141 00:09:10 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.141 00:09:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:40.141 00:09:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.141 00:09:10 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:40.141 00:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.141 00:09:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.141 request: 00:24:40.141 { 00:24:40.141 "name": "nvme0", 00:24:40.141 "trtype": "tcp", 00:24:40.141 "traddr": "10.0.0.1", 00:24:40.141 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:40.141 "adrfam": "ipv4", 00:24:40.141 "trsvcid": "4420", 00:24:40.141 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:40.141 "method": "bdev_nvme_attach_controller", 00:24:40.141 "req_id": 1 00:24:40.141 } 00:24:40.141 Got JSON-RPC error response 00:24:40.141 response: 00:24:40.141 { 00:24:40.141 "code": -32602, 00:24:40.141 "message": "Invalid parameters" 00:24:40.141 } 00:24:40.141 00:09:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:40.141 00:09:10 -- common/autotest_common.sh@641 -- # es=1 00:24:40.141 00:09:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:40.141 00:09:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:40.141 00:09:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:40.141 00:09:10 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.141 00:09:10 -- host/auth.sh@121 -- # jq length 00:24:40.141 00:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.141 00:09:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.141 00:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.141 00:09:10 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:24:40.141 00:09:10 -- host/auth.sh@124 -- # get_main_ns_ip 00:24:40.141 00:09:10 -- nvmf/common.sh@717 -- # local ip 00:24:40.141 00:09:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.141 00:09:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.141 00:09:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.141 00:09:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.141 00:09:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.141 00:09:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.141 00:09:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.141 00:09:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.141 00:09:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.141 00:09:10 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:40.141 00:09:10 -- common/autotest_common.sh@638 -- # local es=0 00:24:40.141 00:09:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:40.141 00:09:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:40.141 00:09:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.141 00:09:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:40.141 00:09:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.141 00:09:10 -- 
common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:40.141 00:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.141 00:09:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.141 request: 00:24:40.142 { 00:24:40.142 "name": "nvme0", 00:24:40.142 "trtype": "tcp", 00:24:40.142 "traddr": "10.0.0.1", 00:24:40.142 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:40.142 "adrfam": "ipv4", 00:24:40.142 "trsvcid": "4420", 00:24:40.142 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:40.142 "dhchap_key": "key2", 00:24:40.142 "method": "bdev_nvme_attach_controller", 00:24:40.142 "req_id": 1 00:24:40.142 } 00:24:40.142 Got JSON-RPC error response 00:24:40.142 response: 00:24:40.142 { 00:24:40.142 "code": -32602, 00:24:40.142 "message": "Invalid parameters" 00:24:40.142 } 00:24:40.142 00:09:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:40.142 00:09:10 -- common/autotest_common.sh@641 -- # es=1 00:24:40.142 00:09:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:40.142 00:09:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:40.142 00:09:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:40.142 00:09:10 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.142 00:09:10 -- host/auth.sh@127 -- # jq length 00:24:40.142 00:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.142 00:09:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.142 00:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.402 00:09:10 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:40.402 00:09:10 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:40.402 00:09:10 -- host/auth.sh@130 -- # cleanup 00:24:40.402 00:09:10 -- host/auth.sh@24 -- # nvmftestfini 00:24:40.402 00:09:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:40.402 00:09:10 -- nvmf/common.sh@117 -- # sync 00:24:40.402 00:09:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:40.402 00:09:10 -- nvmf/common.sh@120 -- # set +e 00:24:40.402 00:09:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:40.402 00:09:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:40.402 rmmod nvme_tcp 00:24:40.402 rmmod nvme_fabrics 00:24:40.402 00:09:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:40.402 00:09:10 -- nvmf/common.sh@124 -- # set -e 00:24:40.402 00:09:10 -- nvmf/common.sh@125 -- # return 0 00:24:40.402 00:09:10 -- nvmf/common.sh@478 -- # '[' -n 518816 ']' 00:24:40.402 00:09:10 -- nvmf/common.sh@479 -- # killprocess 518816 00:24:40.402 00:09:10 -- common/autotest_common.sh@936 -- # '[' -z 518816 ']' 00:24:40.402 00:09:10 -- common/autotest_common.sh@940 -- # kill -0 518816 00:24:40.402 00:09:10 -- common/autotest_common.sh@941 -- # uname 00:24:40.402 00:09:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:40.402 00:09:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 518816 00:24:40.402 00:09:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:40.402 00:09:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:40.402 00:09:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 518816' 00:24:40.402 killing process with pid 518816 00:24:40.402 00:09:10 -- common/autotest_common.sh@955 -- # kill 518816 00:24:40.402 00:09:10 -- common/autotest_common.sh@960 -- # wait 518816 00:24:40.402 00:09:10 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:24:40.402 00:09:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:40.402 00:09:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:40.402 00:09:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:40.402 00:09:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:40.402 00:09:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.402 00:09:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.402 00:09:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.947 00:09:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:42.947 00:09:12 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:42.947 00:09:12 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:42.947 00:09:12 -- host/auth.sh@27 -- # clean_kernel_target 00:24:42.947 00:09:12 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:42.947 00:09:12 -- nvmf/common.sh@675 -- # echo 0 00:24:42.947 00:09:12 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:42.947 00:09:12 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:42.947 00:09:12 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:42.947 00:09:12 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:42.947 00:09:12 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:42.947 00:09:12 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:42.947 00:09:12 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:46.249 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:46.249 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:24:46.511 00:09:16 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eEH /tmp/spdk.key-null.NdY /tmp/spdk.key-sha256.XEF /tmp/spdk.key-sha384.wNu /tmp/spdk.key-sha512.m2I /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:46.511 00:09:16 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:50.724 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
00:24:50.724 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:24:50.724 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:24:50.724 00:24:50.724 real 0m57.862s 00:24:50.724 user 0m51.125s 00:24:50.724 sys 0m14.996s 00:24:50.724 00:09:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:50.724 00:09:20 -- common/autotest_common.sh@10 -- # set +x 00:24:50.724 ************************************ 00:24:50.724 END TEST nvmf_auth 00:24:50.724 ************************************ 00:24:50.724 00:09:20 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:24:50.724 00:09:20 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:50.724 00:09:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:50.724 00:09:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:50.724 00:09:20 -- common/autotest_common.sh@10 -- # set +x 00:24:50.724 ************************************ 00:24:50.724 START TEST nvmf_digest 00:24:50.724 ************************************ 00:24:50.724 00:09:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:50.724 * Looking for test storage... 
00:24:50.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.724 00:09:20 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.724 00:09:20 -- nvmf/common.sh@7 -- # uname -s 00:24:50.724 00:09:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.724 00:09:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.724 00:09:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.724 00:09:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.725 00:09:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.725 00:09:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.725 00:09:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.725 00:09:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.725 00:09:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.725 00:09:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.725 00:09:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.725 00:09:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.725 00:09:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.725 00:09:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.725 00:09:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.725 00:09:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.725 00:09:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.725 00:09:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.725 00:09:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.725 00:09:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.725 00:09:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.725 00:09:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.725 00:09:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.725 00:09:20 -- paths/export.sh@5 -- # export PATH 00:24:50.725 00:09:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.725 00:09:20 -- nvmf/common.sh@47 -- # : 0 00:24:50.725 00:09:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.725 00:09:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.725 00:09:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.725 00:09:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.725 00:09:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.725 00:09:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.725 00:09:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.725 00:09:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.725 00:09:20 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:50.725 00:09:20 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:50.725 00:09:20 -- host/digest.sh@16 -- # runtime=2 00:24:50.725 00:09:20 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:50.725 00:09:20 -- host/digest.sh@138 -- # nvmftestinit 00:24:50.725 00:09:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:50.725 00:09:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.725 00:09:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:50.725 00:09:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:50.725 00:09:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:50.725 00:09:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.725 00:09:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.725 00:09:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.725 00:09:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:50.725 00:09:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:50.725 00:09:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.725 00:09:20 -- common/autotest_common.sh@10 -- # set +x 00:24:58.869 00:09:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:58.869 00:09:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.869 00:09:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.869 00:09:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:58.869 00:09:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.869 00:09:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.869 00:09:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.869 00:09:27 -- 
nvmf/common.sh@295 -- # net_devs=() 00:24:58.869 00:09:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.869 00:09:27 -- nvmf/common.sh@296 -- # e810=() 00:24:58.869 00:09:27 -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.869 00:09:27 -- nvmf/common.sh@297 -- # x722=() 00:24:58.869 00:09:27 -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.869 00:09:27 -- nvmf/common.sh@298 -- # mlx=() 00:24:58.869 00:09:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.869 00:09:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.869 00:09:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.869 00:09:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.870 00:09:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.870 00:09:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.870 00:09:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:58.870 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:58.870 00:09:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.870 00:09:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:58.870 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:58.870 00:09:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.870 00:09:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.870 00:09:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.870 00:09:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:58.870 00:09:27 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.870 00:09:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:58.870 Found net devices under 0000:31:00.0: cvl_0_0 00:24:58.870 00:09:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.870 00:09:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.870 00:09:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.870 00:09:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:58.870 00:09:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.870 00:09:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:58.870 Found net devices under 0000:31:00.1: cvl_0_1 00:24:58.870 00:09:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.870 00:09:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:58.870 00:09:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:58.870 00:09:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:58.870 00:09:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.870 00:09:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.870 00:09:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.870 00:09:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.870 00:09:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.870 00:09:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.870 00:09:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:58.870 00:09:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.870 00:09:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.870 00:09:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.870 00:09:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.870 00:09:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.870 00:09:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.870 00:09:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.870 00:09:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.870 00:09:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.870 00:09:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.870 00:09:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.870 00:09:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.870 00:09:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:24:58.870 00:24:58.870 --- 10.0.0.2 ping statistics --- 00:24:58.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.870 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:24:58.870 00:09:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:24:58.870 00:24:58.870 --- 10.0.0.1 ping statistics --- 00:24:58.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.870 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:24:58.870 00:09:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.870 00:09:27 -- nvmf/common.sh@411 -- # return 0 00:24:58.870 00:09:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:58.870 00:09:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.870 00:09:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:58.870 00:09:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.870 00:09:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:58.870 00:09:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:58.870 00:09:27 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:58.870 00:09:27 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:58.870 00:09:27 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:58.870 00:09:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:58.870 00:09:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:58.870 00:09:27 -- common/autotest_common.sh@10 -- # set +x 00:24:58.870 ************************************ 00:24:58.870 START TEST nvmf_digest_clean 00:24:58.870 ************************************ 00:24:58.870 00:09:28 -- common/autotest_common.sh@1111 -- # run_digest 00:24:58.870 00:09:28 -- host/digest.sh@120 -- # local dsa_initiator 00:24:58.870 00:09:28 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:58.870 00:09:28 -- host/digest.sh@121 -- # dsa_initiator=false 00:24:58.870 00:09:28 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:58.870 00:09:28 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:58.870 00:09:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:58.870 00:09:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:58.870 00:09:28 -- common/autotest_common.sh@10 -- # set +x 00:24:58.870 00:09:28 -- nvmf/common.sh@470 -- # nvmfpid=536132 00:24:58.870 00:09:28 -- nvmf/common.sh@471 -- # waitforlisten 536132 00:24:58.870 00:09:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:58.870 00:09:28 -- common/autotest_common.sh@817 -- # '[' -z 536132 ']' 00:24:58.870 00:09:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.870 00:09:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:58.870 00:09:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.870 00:09:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:58.870 00:09:28 -- common/autotest_common.sh@10 -- # set +x 00:24:58.870 [2024-04-27 00:09:28.156865] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
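The nvmf_tcp_init sequence traced above builds the loopback topology the digest tests run on: the target-side E810 port (cvl_0_0 on this machine) is moved into its own network namespace, both ends get 10.0.0.x/24 addresses, an iptables rule opens TCP port 4420 toward the initiator interface, and the two pings confirm reachability in each direction. A minimal sketch of that bring-up, using the interface and namespace names from this run (other hosts will report different net device names):

# Sketch of the test topology created by nvmf_tcp_init (names taken from this run).
ip netns add cvl_0_0_ns_spdk                                  # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP in
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator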
00:24:58.870 [2024-04-27 00:09:28.156911] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.870 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.871 [2024-04-27 00:09:28.222857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.871 [2024-04-27 00:09:28.288874] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.871 [2024-04-27 00:09:28.288909] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.871 [2024-04-27 00:09:28.288917] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.871 [2024-04-27 00:09:28.288923] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.871 [2024-04-27 00:09:28.288929] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.871 [2024-04-27 00:09:28.288952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.871 00:09:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:58.871 00:09:28 -- common/autotest_common.sh@850 -- # return 0 00:24:58.871 00:09:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:58.871 00:09:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:58.871 00:09:28 -- common/autotest_common.sh@10 -- # set +x 00:24:58.871 00:09:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.871 00:09:28 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:58.871 00:09:28 -- host/digest.sh@126 -- # common_target_config 00:24:58.871 00:09:28 -- host/digest.sh@43 -- # rpc_cmd 00:24:58.871 00:09:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.871 00:09:28 -- common/autotest_common.sh@10 -- # set +x 00:24:58.871 null0 00:24:58.871 [2024-04-27 00:09:29.059657] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.871 [2024-04-27 00:09:29.083852] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.871 00:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.871 00:09:29 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:59.132 00:09:29 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:59.132 00:09:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:59.132 00:09:29 -- host/digest.sh@80 -- # rw=randread 00:24:59.132 00:09:29 -- host/digest.sh@80 -- # bs=4096 00:24:59.132 00:09:29 -- host/digest.sh@80 -- # qd=128 00:24:59.132 00:09:29 -- host/digest.sh@80 -- # scan_dsa=false 00:24:59.132 00:09:29 -- host/digest.sh@83 -- # bperfpid=536365 00:24:59.132 00:09:29 -- host/digest.sh@84 -- # waitforlisten 536365 /var/tmp/bperf.sock 00:24:59.132 00:09:29 -- common/autotest_common.sh@817 -- # '[' -z 536365 ']' 00:24:59.132 00:09:29 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:59.132 00:09:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:59.132 00:09:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:59.132 00:09:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:59.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:59.132 00:09:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:59.132 00:09:29 -- common/autotest_common.sh@10 -- # set +x 00:24:59.132 [2024-04-27 00:09:29.137936] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:24:59.132 [2024-04-27 00:09:29.137982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid536365 ] 00:24:59.132 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.132 [2024-04-27 00:09:29.196740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.132 [2024-04-27 00:09:29.261017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.703 00:09:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:59.703 00:09:29 -- common/autotest_common.sh@850 -- # return 0 00:24:59.703 00:09:29 -- host/digest.sh@86 -- # false 00:24:59.703 00:09:29 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:59.703 00:09:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:59.963 00:09:30 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.963 00:09:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:00.534 nvme0n1 00:25:00.534 00:09:30 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:00.534 00:09:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:00.534 Running I/O for 2 seconds... 
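Each run_bperf cycle above follows the same wiring: bdevperf is launched idle on its own RPC socket (-z --wait-for-rpc), framework init is completed over that socket, an NVMe-oF/TCP controller is attached with data digest enabled (--ddgst), and the workload is then kicked off through bdevperf.py. A condensed sketch of that sequence as it appears in this log, with the long workspace paths shortened to the spdk repo root:

# One run_bperf cycle (here: randread, 4 KiB blocks, queue depth 128).
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests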
00:25:02.446 00:25:02.446 Latency(us) 00:25:02.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.446 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:02.446 nvme0n1 : 2.00 20882.38 81.57 0.00 0.00 6123.50 2976.43 20097.71 00:25:02.446 =================================================================================================================== 00:25:02.446 Total : 20882.38 81.57 0.00 0.00 6123.50 2976.43 20097.71 00:25:02.446 0 00:25:02.446 00:09:32 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:02.446 00:09:32 -- host/digest.sh@93 -- # get_accel_stats 00:25:02.446 00:09:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:02.446 | select(.opcode=="crc32c") 00:25:02.446 | "\(.module_name) \(.executed)"' 00:25:02.446 00:09:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:02.446 00:09:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:02.710 00:09:32 -- host/digest.sh@94 -- # false 00:25:02.710 00:09:32 -- host/digest.sh@94 -- # exp_module=software 00:25:02.710 00:09:32 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:02.710 00:09:32 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:02.710 00:09:32 -- host/digest.sh@98 -- # killprocess 536365 00:25:02.710 00:09:32 -- common/autotest_common.sh@936 -- # '[' -z 536365 ']' 00:25:02.710 00:09:32 -- common/autotest_common.sh@940 -- # kill -0 536365 00:25:02.710 00:09:32 -- common/autotest_common.sh@941 -- # uname 00:25:02.710 00:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:02.710 00:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 536365 00:25:02.710 00:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:02.710 00:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:02.710 00:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 536365' 00:25:02.710 killing process with pid 536365 00:25:02.710 00:09:32 -- common/autotest_common.sh@955 -- # kill 536365 00:25:02.710 Received shutdown signal, test time was about 2.000000 seconds 00:25:02.710 00:25:02.710 Latency(us) 00:25:02.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.710 =================================================================================================================== 00:25:02.710 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.710 00:09:32 -- common/autotest_common.sh@960 -- # wait 536365 00:25:03.015 00:09:32 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:03.015 00:09:32 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:03.015 00:09:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:03.015 00:09:32 -- host/digest.sh@80 -- # rw=randread 00:25:03.015 00:09:32 -- host/digest.sh@80 -- # bs=131072 00:25:03.015 00:09:32 -- host/digest.sh@80 -- # qd=16 00:25:03.015 00:09:32 -- host/digest.sh@80 -- # scan_dsa=false 00:25:03.015 00:09:32 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:03.015 00:09:32 -- host/digest.sh@83 -- # bperfpid=537164 00:25:03.015 00:09:32 -- host/digest.sh@84 -- # waitforlisten 537164 /var/tmp/bperf.sock 00:25:03.015 00:09:32 -- common/autotest_common.sh@817 -- # '[' -z 537164 ']' 00:25:03.015 00:09:32 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:03.015 00:09:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:03.015 00:09:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:03.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:03.015 00:09:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:03.016 00:09:32 -- common/autotest_common.sh@10 -- # set +x 00:25:03.016 [2024-04-27 00:09:32.954018] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:25:03.016 [2024-04-27 00:09:32.954066] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid537164 ] 00:25:03.016 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:03.016 Zero copy mechanism will not be used. 00:25:03.016 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.016 [2024-04-27 00:09:33.011121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.016 [2024-04-27 00:09:33.074152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.016 00:09:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:03.016 00:09:33 -- common/autotest_common.sh@850 -- # return 0 00:25:03.016 00:09:33 -- host/digest.sh@86 -- # false 00:25:03.016 00:09:33 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:03.016 00:09:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:03.279 00:09:33 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.279 00:09:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.540 nvme0n1 00:25:03.540 00:09:33 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:03.540 00:09:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:03.540 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:03.540 Zero copy mechanism will not be used. 00:25:03.540 Running I/O for 2 seconds... 
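Once a run finishes, the test checks that the digests really were computed and by the module it expects: with scan_dsa=false the expected module is software, so get_accel_stats queries the bperf application's accel statistics and the jq filter shown above extracts the crc32c entry. A sketch of that verification, reconstructed from the helper output in the log:

# Verify crc32c digests were executed, and by the expected accel module.
stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
read -r acc_module acc_executed <<< "$stats"
(( acc_executed > 0 ))                 # some digests must have been computed
[[ $acc_module == software ]]          # and by the software module (no DSA offload here)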
00:25:05.457 00:25:05.457 Latency(us) 00:25:05.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.457 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:05.457 nvme0n1 : 2.01 3309.21 413.65 0.00 0.00 4830.84 1276.59 15182.51 00:25:05.457 =================================================================================================================== 00:25:05.457 Total : 3309.21 413.65 0.00 0.00 4830.84 1276.59 15182.51 00:25:05.457 0 00:25:05.457 00:09:35 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:05.457 00:09:35 -- host/digest.sh@93 -- # get_accel_stats 00:25:05.457 00:09:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:05.457 00:09:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:05.457 00:09:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:05.457 | select(.opcode=="crc32c") 00:25:05.457 | "\(.module_name) \(.executed)"' 00:25:05.719 00:09:35 -- host/digest.sh@94 -- # false 00:25:05.719 00:09:35 -- host/digest.sh@94 -- # exp_module=software 00:25:05.719 00:09:35 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:05.719 00:09:35 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:05.719 00:09:35 -- host/digest.sh@98 -- # killprocess 537164 00:25:05.719 00:09:35 -- common/autotest_common.sh@936 -- # '[' -z 537164 ']' 00:25:05.719 00:09:35 -- common/autotest_common.sh@940 -- # kill -0 537164 00:25:05.719 00:09:35 -- common/autotest_common.sh@941 -- # uname 00:25:05.719 00:09:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:05.719 00:09:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 537164 00:25:05.719 00:09:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:05.719 00:09:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:05.719 00:09:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 537164' 00:25:05.719 killing process with pid 537164 00:25:05.719 00:09:35 -- common/autotest_common.sh@955 -- # kill 537164 00:25:05.719 Received shutdown signal, test time was about 2.000000 seconds 00:25:05.719 00:25:05.719 Latency(us) 00:25:05.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.719 =================================================================================================================== 00:25:05.719 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.719 00:09:35 -- common/autotest_common.sh@960 -- # wait 537164 00:25:05.980 00:09:35 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:05.980 00:09:35 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:05.980 00:09:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:05.980 00:09:35 -- host/digest.sh@80 -- # rw=randwrite 00:25:05.980 00:09:35 -- host/digest.sh@80 -- # bs=4096 00:25:05.980 00:09:35 -- host/digest.sh@80 -- # qd=128 00:25:05.980 00:09:35 -- host/digest.sh@80 -- # scan_dsa=false 00:25:05.980 00:09:35 -- host/digest.sh@83 -- # bperfpid=537686 00:25:05.980 00:09:35 -- host/digest.sh@84 -- # waitforlisten 537686 /var/tmp/bperf.sock 00:25:05.980 00:09:35 -- common/autotest_common.sh@817 -- # '[' -z 537686 ']' 00:25:05.980 00:09:35 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:05.980 00:09:36 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:05.980 00:09:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:05.980 00:09:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:05.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:05.980 00:09:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:05.980 00:09:36 -- common/autotest_common.sh@10 -- # set +x 00:25:05.980 [2024-04-27 00:09:36.046396] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:25:05.980 [2024-04-27 00:09:36.046448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid537686 ] 00:25:05.980 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.980 [2024-04-27 00:09:36.106066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.980 [2024-04-27 00:09:36.168554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.923 00:09:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:06.923 00:09:36 -- common/autotest_common.sh@850 -- # return 0 00:25:06.923 00:09:36 -- host/digest.sh@86 -- # false 00:25:06.923 00:09:36 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:06.923 00:09:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:06.923 00:09:37 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.923 00:09:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.184 nvme0n1 00:25:07.184 00:09:37 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:07.184 00:09:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.184 Running I/O for 2 seconds... 
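The teardown between runs is the killprocess pattern that keeps repeating in the log: confirm the pid is still alive, check the process name (reactor_1 for the bperf instances, reactor_0 for the target), send the kill, and wait so the shutdown latency summary is flushed before the next run starts. A simplified sketch of that helper, based on what the trace shows rather than the full autotest_common.sh implementation:

# Simplified killprocess flow used to stop each bdevperf instance.
killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # collect its shutdown stats output
}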
00:25:09.098 00:25:09.099 Latency(us) 00:25:09.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.099 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:09.099 nvme0n1 : 2.01 22483.93 87.83 0.00 0.00 5684.20 3003.73 17257.81 00:25:09.099 =================================================================================================================== 00:25:09.099 Total : 22483.93 87.83 0.00 0.00 5684.20 3003.73 17257.81 00:25:09.099 0 00:25:09.361 00:09:39 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:09.361 00:09:39 -- host/digest.sh@93 -- # get_accel_stats 00:25:09.361 00:09:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:09.361 00:09:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:09.361 | select(.opcode=="crc32c") 00:25:09.361 | "\(.module_name) \(.executed)"' 00:25:09.361 00:09:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:09.361 00:09:39 -- host/digest.sh@94 -- # false 00:25:09.361 00:09:39 -- host/digest.sh@94 -- # exp_module=software 00:25:09.361 00:09:39 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:09.361 00:09:39 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:09.361 00:09:39 -- host/digest.sh@98 -- # killprocess 537686 00:25:09.361 00:09:39 -- common/autotest_common.sh@936 -- # '[' -z 537686 ']' 00:25:09.361 00:09:39 -- common/autotest_common.sh@940 -- # kill -0 537686 00:25:09.361 00:09:39 -- common/autotest_common.sh@941 -- # uname 00:25:09.361 00:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:09.361 00:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 537686 00:25:09.361 00:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:09.361 00:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:09.361 00:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 537686' 00:25:09.361 killing process with pid 537686 00:25:09.361 00:09:39 -- common/autotest_common.sh@955 -- # kill 537686 00:25:09.361 Received shutdown signal, test time was about 2.000000 seconds 00:25:09.361 00:25:09.361 Latency(us) 00:25:09.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.361 =================================================================================================================== 00:25:09.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.361 00:09:39 -- common/autotest_common.sh@960 -- # wait 537686 00:25:09.622 00:09:39 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:09.622 00:09:39 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:09.622 00:09:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:09.622 00:09:39 -- host/digest.sh@80 -- # rw=randwrite 00:25:09.622 00:09:39 -- host/digest.sh@80 -- # bs=131072 00:25:09.622 00:09:39 -- host/digest.sh@80 -- # qd=16 00:25:09.622 00:09:39 -- host/digest.sh@80 -- # scan_dsa=false 00:25:09.622 00:09:39 -- host/digest.sh@83 -- # bperfpid=538432 00:25:09.622 00:09:39 -- host/digest.sh@84 -- # waitforlisten 538432 /var/tmp/bperf.sock 00:25:09.622 00:09:39 -- common/autotest_common.sh@817 -- # '[' -z 538432 ']' 00:25:09.622 00:09:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.622 00:09:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:09.622 00:09:39 -- common/autotest_common.sh@824 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:09.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.622 00:09:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:09.622 00:09:39 -- common/autotest_common.sh@10 -- # set +x 00:25:09.622 00:09:39 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:09.622 [2024-04-27 00:09:39.710102] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:25:09.622 [2024-04-27 00:09:39.710159] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid538432 ] 00:25:09.622 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:09.622 Zero copy mechanism will not be used. 00:25:09.622 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.622 [2024-04-27 00:09:39.768791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.622 [2024-04-27 00:09:39.831922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.566 00:09:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:10.566 00:09:40 -- common/autotest_common.sh@850 -- # return 0 00:25:10.566 00:09:40 -- host/digest.sh@86 -- # false 00:25:10.566 00:09:40 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:10.566 00:09:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:10.566 00:09:40 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.566 00:09:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.827 nvme0n1 00:25:10.827 00:09:40 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:10.827 00:09:40 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:10.827 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:10.827 Zero copy mechanism will not be used. 00:25:10.827 Running I/O for 2 seconds... 
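Taken together, the four clean-digest runs in this test sweep one small parameter grid, always with DSA scanning disabled: randread and randwrite, once at 4 KiB blocks with queue depth 128 and once at 128 KiB blocks with queue depth 16 (the larger size is what triggers the zero-copy threshold notice above). digest.sh invokes them one after another; expressed as a loop purely for illustration:

# Illustration of the run_bperf parameter grid exercised by nvmf_digest_clean.
for spec in "randread 4096 128" "randread 131072 16" \
            "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $spec false        # rw, block size, queue depth, scan_dsa=false
done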
00:25:13.375 00:25:13.375 Latency(us) 00:25:13.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.375 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:13.375 nvme0n1 : 2.00 4614.54 576.82 0.00 0.00 3461.80 1631.57 15947.09 00:25:13.375 =================================================================================================================== 00:25:13.375 Total : 4614.54 576.82 0.00 0.00 3461.80 1631.57 15947.09 00:25:13.375 0 00:25:13.375 00:09:43 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:13.375 00:09:43 -- host/digest.sh@93 -- # get_accel_stats 00:25:13.375 00:09:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:13.375 00:09:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:13.375 | select(.opcode=="crc32c") 00:25:13.375 | "\(.module_name) \(.executed)"' 00:25:13.375 00:09:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:13.375 00:09:43 -- host/digest.sh@94 -- # false 00:25:13.375 00:09:43 -- host/digest.sh@94 -- # exp_module=software 00:25:13.375 00:09:43 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:13.375 00:09:43 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:13.375 00:09:43 -- host/digest.sh@98 -- # killprocess 538432 00:25:13.375 00:09:43 -- common/autotest_common.sh@936 -- # '[' -z 538432 ']' 00:25:13.375 00:09:43 -- common/autotest_common.sh@940 -- # kill -0 538432 00:25:13.375 00:09:43 -- common/autotest_common.sh@941 -- # uname 00:25:13.375 00:09:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:13.375 00:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 538432 00:25:13.375 00:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:13.375 00:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:13.375 00:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 538432' 00:25:13.375 killing process with pid 538432 00:25:13.375 00:09:43 -- common/autotest_common.sh@955 -- # kill 538432 00:25:13.375 Received shutdown signal, test time was about 2.000000 seconds 00:25:13.375 00:25:13.375 Latency(us) 00:25:13.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.375 =================================================================================================================== 00:25:13.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.375 00:09:43 -- common/autotest_common.sh@960 -- # wait 538432 00:25:13.375 00:09:43 -- host/digest.sh@132 -- # killprocess 536132 00:25:13.375 00:09:43 -- common/autotest_common.sh@936 -- # '[' -z 536132 ']' 00:25:13.375 00:09:43 -- common/autotest_common.sh@940 -- # kill -0 536132 00:25:13.375 00:09:43 -- common/autotest_common.sh@941 -- # uname 00:25:13.375 00:09:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:13.375 00:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 536132 00:25:13.375 00:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:13.375 00:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:13.375 00:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 536132' 00:25:13.375 killing process with pid 536132 00:25:13.375 00:09:43 -- common/autotest_common.sh@955 -- # kill 536132 00:25:13.375 00:09:43 -- common/autotest_common.sh@960 -- # wait 536132 00:25:13.637 00:25:13.637 
real 0m15.505s 00:25:13.637 user 0m30.406s 00:25:13.637 sys 0m3.082s 00:25:13.637 00:09:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:13.637 00:09:43 -- common/autotest_common.sh@10 -- # set +x 00:25:13.637 ************************************ 00:25:13.637 END TEST nvmf_digest_clean 00:25:13.637 ************************************ 00:25:13.637 00:09:43 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:13.637 00:09:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:13.637 00:09:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:13.637 00:09:43 -- common/autotest_common.sh@10 -- # set +x 00:25:13.637 ************************************ 00:25:13.637 START TEST nvmf_digest_error 00:25:13.637 ************************************ 00:25:13.637 00:09:43 -- common/autotest_common.sh@1111 -- # run_digest_error 00:25:13.637 00:09:43 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:13.637 00:09:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:13.637 00:09:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:13.637 00:09:43 -- common/autotest_common.sh@10 -- # set +x 00:25:13.637 00:09:43 -- nvmf/common.sh@470 -- # nvmfpid=539248 00:25:13.637 00:09:43 -- nvmf/common.sh@471 -- # waitforlisten 539248 00:25:13.637 00:09:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:13.637 00:09:43 -- common/autotest_common.sh@817 -- # '[' -z 539248 ']' 00:25:13.637 00:09:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.637 00:09:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:13.637 00:09:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.637 00:09:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:13.637 00:09:43 -- common/autotest_common.sh@10 -- # set +x 00:25:13.637 [2024-04-27 00:09:43.854234] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:25:13.637 [2024-04-27 00:09:43.854280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.898 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.898 [2024-04-27 00:09:43.920022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.898 [2024-04-27 00:09:43.985276] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.898 [2024-04-27 00:09:43.985312] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.898 [2024-04-27 00:09:43.985319] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.898 [2024-04-27 00:09:43.985326] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.898 [2024-04-27 00:09:43.985331] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
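nvmf_digest_error restarts the target the same way, and the --wait-for-rpc launch matters here: the target stays paused before subsystem initialization so the test can reconfigure the accel layer first. A minimal sketch of the nvmfappstart/waitforlisten pattern visible in the trace (the polling detail in waitforlisten is an assumption based on typical autotest behaviour, not shown in this log):

# Paused target launch inside the test namespace (nvmfappstart --wait-for-rpc).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# waitforlisten: keep retrying a cheap RPC until the socket answers (assumed detail).
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done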
00:25:13.898 [2024-04-27 00:09:43.985349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.469 00:09:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:14.469 00:09:44 -- common/autotest_common.sh@850 -- # return 0 00:25:14.469 00:09:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:14.469 00:09:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:14.469 00:09:44 -- common/autotest_common.sh@10 -- # set +x 00:25:14.469 00:09:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.469 00:09:44 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:14.469 00:09:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.469 00:09:44 -- common/autotest_common.sh@10 -- # set +x 00:25:14.469 [2024-04-27 00:09:44.659262] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:14.469 00:09:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.469 00:09:44 -- host/digest.sh@105 -- # common_target_config 00:25:14.469 00:09:44 -- host/digest.sh@43 -- # rpc_cmd 00:25:14.469 00:09:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.469 00:09:44 -- common/autotest_common.sh@10 -- # set +x 00:25:14.730 null0 00:25:14.730 [2024-04-27 00:09:44.736241] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.730 [2024-04-27 00:09:44.760440] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.730 00:09:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.730 00:09:44 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:14.730 00:09:44 -- host/digest.sh@54 -- # local rw bs qd 00:25:14.730 00:09:44 -- host/digest.sh@56 -- # rw=randread 00:25:14.730 00:09:44 -- host/digest.sh@56 -- # bs=4096 00:25:14.730 00:09:44 -- host/digest.sh@56 -- # qd=128 00:25:14.730 00:09:44 -- host/digest.sh@58 -- # bperfpid=539425 00:25:14.730 00:09:44 -- host/digest.sh@60 -- # waitforlisten 539425 /var/tmp/bperf.sock 00:25:14.730 00:09:44 -- common/autotest_common.sh@817 -- # '[' -z 539425 ']' 00:25:14.730 00:09:44 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:14.730 00:09:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:14.730 00:09:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:14.730 00:09:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:14.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:14.730 00:09:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:14.730 00:09:44 -- common/autotest_common.sh@10 -- # set +x 00:25:14.730 [2024-04-27 00:09:44.813344] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:25:14.730 [2024-04-27 00:09:44.813391] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid539425 ] 00:25:14.730 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.730 [2024-04-27 00:09:44.872411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.730 [2024-04-27 00:09:44.936315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.673 00:09:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:15.673 00:09:45 -- common/autotest_common.sh@850 -- # return 0 00:25:15.673 00:09:45 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:15.674 00:09:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:15.674 00:09:45 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:15.674 00:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.674 00:09:45 -- common/autotest_common.sh@10 -- # set +x 00:25:15.674 00:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.674 00:09:45 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.674 00:09:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.935 nvme0n1 00:25:15.935 00:09:45 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:15.935 00:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.935 00:09:45 -- common/autotest_common.sh@10 -- # set +x 00:25:15.935 00:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.935 00:09:45 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:15.935 00:09:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:15.935 Running I/O for 2 seconds... 
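This is the error-injection variant: crc32c on the target has been assigned to the accel "error" module, the bperf side is told to keep NVMe error statistics and to retry failed I/O at the bdev layer rather than failing fast (--bdev-retry-count -1), and once the digest-enabled controller is attached the injection is flipped from "disable" to "corrupt". The corrupted digests produced by the target are what the initiator-side "data digest error" messages and TRANSIENT TRANSPORT ERROR completions in the output that follows are reporting. A condensed sketch of that setup, paths shortened as before:

# Error-injection setup for nvmf_digest_error (condensed from the trace above).
./scripts/rpc.py accel_assign_opc -o crc32c -m error              # target-side socket
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
./scripts/rpc.py accel_error_inject_error -o crc32c -t disable    # start clean
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests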
00:25:15.935 [2024-04-27 00:09:46.099354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:15.935 [2024-04-27 00:09:46.099392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.935 [2024-04-27 00:09:46.099403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.935 [2024-04-27 00:09:46.109694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:15.935 [2024-04-27 00:09:46.109717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.935 [2024-04-27 00:09:46.109726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.935 [2024-04-27 00:09:46.123491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:15.935 [2024-04-27 00:09:46.123513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.935 [2024-04-27 00:09:46.123522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.935 [2024-04-27 00:09:46.134319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:15.935 [2024-04-27 00:09:46.134340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.935 [2024-04-27 00:09:46.134349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.935 [2024-04-27 00:09:46.145852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:15.935 [2024-04-27 00:09:46.145873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.935 [2024-04-27 00:09:46.145883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.197 [2024-04-27 00:09:46.158177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.197 [2024-04-27 00:09:46.158199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.197 [2024-04-27 00:09:46.158208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.197 [2024-04-27 00:09:46.170114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.197 [2024-04-27 00:09:46.170135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.197 [2024-04-27 00:09:46.170143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.197 [2024-04-27 00:09:46.183495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.197 [2024-04-27 00:09:46.183516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.197 [2024-04-27 00:09:46.183525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.197 [2024-04-27 00:09:46.195792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.197 [2024-04-27 00:09:46.195813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.197 [2024-04-27 00:09:46.195822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.197 [2024-04-27 00:09:46.206509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.197 [2024-04-27 00:09:46.206529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.197 [2024-04-27 00:09:46.206538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.197 [2024-04-27 00:09:46.219787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.197 [2024-04-27 00:09:46.219807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.197 [2024-04-27 00:09:46.219820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.232168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.232189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.232198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.243302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.243322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.243331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.259678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.259700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.259708] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.273473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.273493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.273502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.284025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.284046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.284055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.297490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.297511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.297520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.308866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.308887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.308896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.323910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.323931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.323940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.340133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.340158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.340167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.352908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.352929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 
00:09:46.352938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.364889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.364910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.364918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.376163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.376184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.376193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.388315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.388336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.388345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.400923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.400944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.400952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.198 [2024-04-27 00:09:46.411784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.198 [2024-04-27 00:09:46.411804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.198 [2024-04-27 00:09:46.411812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.426871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.426892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.426901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.437499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.437519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3279 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.437528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.451843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.451864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.451874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.464182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.464203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.464212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.475397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.475417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.475425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.488469] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.488489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.488498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.499986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.500006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.500015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.510768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.510789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.510797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.523423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.523444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:18951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.523452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.535339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.535360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.535368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.547704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.547725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.547737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.560031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.560052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.560060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.571104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.571125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.571133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.585529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.585550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.585558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.596645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.596666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.596675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.608950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.608970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.608979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.620367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.620388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.620396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.632461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.632481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.632490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.644181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.644202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.644210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.656200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.656224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.656233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.460 [2024-04-27 00:09:46.668866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.460 [2024-04-27 00:09:46.668887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.460 [2024-04-27 00:09:46.668895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.722 [2024-04-27 00:09:46.680710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.722 [2024-04-27 00:09:46.680731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.722 [2024-04-27 00:09:46.680739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.722 [2024-04-27 00:09:46.691228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 
00:25:16.722 [2024-04-27 00:09:46.691249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.722 [2024-04-27 00:09:46.691257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.722 [2024-04-27 00:09:46.704680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.722 [2024-04-27 00:09:46.704700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.722 [2024-04-27 00:09:46.704709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.722 [2024-04-27 00:09:46.716960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.722 [2024-04-27 00:09:46.716981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.722 [2024-04-27 00:09:46.716989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.722 [2024-04-27 00:09:46.729313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.722 [2024-04-27 00:09:46.729334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.722 [2024-04-27 00:09:46.729343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.722 [2024-04-27 00:09:46.742701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.722 [2024-04-27 00:09:46.742721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.722 [2024-04-27 00:09:46.742729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.722 [2024-04-27 00:09:46.754668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.722 [2024-04-27 00:09:46.754687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.722 [2024-04-27 00:09:46.754696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.722 [2024-04-27 00:09:46.766772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.722 [2024-04-27 00:09:46.766793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.766802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.778520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.778540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.778549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.790308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.790329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.790337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.802569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.802589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.802597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.815341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.815362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.815371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.826338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.826361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.826370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.839691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.839712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.839720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.851470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.851490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.851498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.861915] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.861942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.861951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.876748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.876769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.876777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.888112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.888132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.888141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.899767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.899788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.899796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.911250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.911269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.911278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.925009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.925030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.925038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.723 [2024-04-27 00:09:46.937388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.723 [2024-04-27 00:09:46.937408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.723 [2024-04-27 00:09:46.937416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:46.949131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:46.949152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:46.949161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:46.960490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:46.960511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:46.960519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:46.973551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:46.973571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:46.973579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:46.984942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:46.984962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:46.984971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:46.997554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:46.997574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:46.997582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.008315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.008335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.008344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.020345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.020365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.020374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.032764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.032785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.032793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.043424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.043445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.043453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.057609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.057629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.057638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.068258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.068278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.068290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.081328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.081348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.081356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.092952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.092973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.092981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.107102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.107123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.107131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.118206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.118227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.118235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.133350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.133370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.133379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.146208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.146228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.146236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.158412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.158431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.158440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.169436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.169456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.169465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.181593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.181617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.986 [2024-04-27 00:09:47.181626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.986 [2024-04-27 00:09:47.195167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:16.986 [2024-04-27 00:09:47.195189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:16.986 [2024-04-27 00:09:47.195197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.208208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.208229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.208237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.222180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.222201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.222209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.233459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.233479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.233488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.250111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.250132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.250141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.264843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.264864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.264873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.275984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.276004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.276013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.287131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.287152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:21269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.287160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.298422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.298443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.298451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.311617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.311637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.311645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.323255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.323277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.323285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.335303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.335323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.335332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.346867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.346888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.346896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.359668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.359688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.359696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.371193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.371213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.371221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.381937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.381957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.381966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.397866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.397887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.397899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.411313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.411333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.411342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.422780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.422800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.422808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.435651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.435671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.435680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.447030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.249 [2024-04-27 00:09:47.447050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.447058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.249 [2024-04-27 00:09:47.458610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 
00:25:17.249 [2024-04-27 00:09:47.458631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.249 [2024-04-27 00:09:47.458639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.512 [2024-04-27 00:09:47.471365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.512 [2024-04-27 00:09:47.471386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.512 [2024-04-27 00:09:47.471395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.512 [2024-04-27 00:09:47.484596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.512 [2024-04-27 00:09:47.484616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.512 [2024-04-27 00:09:47.484625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.512 [2024-04-27 00:09:47.495399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.512 [2024-04-27 00:09:47.495419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.512 [2024-04-27 00:09:47.495428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.512 [2024-04-27 00:09:47.507859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.512 [2024-04-27 00:09:47.507879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.512 [2024-04-27 00:09:47.507887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.512 [2024-04-27 00:09:47.520237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.512 [2024-04-27 00:09:47.520257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.512 [2024-04-27 00:09:47.520266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.512 [2024-04-27 00:09:47.532222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.512 [2024-04-27 00:09:47.532242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.512 [2024-04-27 00:09:47.532250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.512 [2024-04-27 00:09:47.542768] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:17.512 [2024-04-27 00:09:47.542788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.512 [2024-04-27 00:09:47.542796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-message pattern (nvme_tcp.c:1447 data digest error on tqpair=(0x23e50f0), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats with varying cid/lba values for the remaining reads of this 2-second, qd=128, 4096-byte randread run, timestamps 00:09:47.554 through 00:09:48.031 ...]
00:25:18.035 [2024-04-27 00:09:48.044004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:18.035 [2024-04-27 00:09:48.044025] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.035 [2024-04-27 00:09:48.044034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.035 [2024-04-27 00:09:48.056516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:18.035 [2024-04-27 00:09:48.056536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.035 [2024-04-27 00:09:48.056544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.035 [2024-04-27 00:09:48.067631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:18.035 [2024-04-27 00:09:48.067652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.035 [2024-04-27 00:09:48.067660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.035 [2024-04-27 00:09:48.079628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e50f0) 00:25:18.036 [2024-04-27 00:09:48.079648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.036 [2024-04-27 00:09:48.079657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.036 00:25:18.036 Latency(us) 00:25:18.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.036 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:18.036 nvme0n1 : 2.00 20716.92 80.93 0.00 0.00 6171.52 3153.92 18240.85 00:25:18.036 =================================================================================================================== 00:25:18.036 Total : 20716.92 80.93 0.00 0.00 6171.52 3153.92 18240.85 00:25:18.036 0 00:25:18.036 00:09:48 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:18.036 00:09:48 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:18.036 00:09:48 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:18.036 | .driver_specific 00:25:18.036 | .nvme_error 00:25:18.036 | .status_code 00:25:18.036 | .command_transient_transport_error' 00:25:18.036 00:09:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:18.296 00:09:48 -- host/digest.sh@71 -- # (( 162 > 0 )) 00:25:18.296 00:09:48 -- host/digest.sh@73 -- # killprocess 539425 00:25:18.296 00:09:48 -- common/autotest_common.sh@936 -- # '[' -z 539425 ']' 00:25:18.296 00:09:48 -- common/autotest_common.sh@940 -- # kill -0 539425 00:25:18.296 00:09:48 -- common/autotest_common.sh@941 -- # uname 00:25:18.296 00:09:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.296 00:09:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 539425 00:25:18.296 00:09:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:18.296 00:09:48 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:25:18.296 00:09:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 539425' 00:25:18.296 killing process with pid 539425 00:25:18.296 00:09:48 -- common/autotest_common.sh@955 -- # kill 539425 00:25:18.296 Received shutdown signal, test time was about 2.000000 seconds 00:25:18.296 00:25:18.296 Latency(us) 00:25:18.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.296 =================================================================================================================== 00:25:18.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.296 00:09:48 -- common/autotest_common.sh@960 -- # wait 539425 00:25:18.296 00:09:48 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:18.296 00:09:48 -- host/digest.sh@54 -- # local rw bs qd 00:25:18.296 00:09:48 -- host/digest.sh@56 -- # rw=randread 00:25:18.296 00:09:48 -- host/digest.sh@56 -- # bs=131072 00:25:18.296 00:09:48 -- host/digest.sh@56 -- # qd=16 00:25:18.296 00:09:48 -- host/digest.sh@58 -- # bperfpid=540191 00:25:18.296 00:09:48 -- host/digest.sh@60 -- # waitforlisten 540191 /var/tmp/bperf.sock 00:25:18.296 00:09:48 -- common/autotest_common.sh@817 -- # '[' -z 540191 ']' 00:25:18.296 00:09:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:18.296 00:09:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:18.296 00:09:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:18.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:18.296 00:09:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:18.296 00:09:48 -- common/autotest_common.sh@10 -- # set +x 00:25:18.297 00:09:48 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:18.297 [2024-04-27 00:09:48.489536] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:25:18.297 [2024-04-27 00:09:48.489593] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid540191 ] 00:25:18.297 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:18.297 Zero copy mechanism will not be used. 
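
The (( 162 > 0 )) check traced above is how host/digest.sh verifies this phase: it reads the per-bdev NVMe error counters over bdevperf's RPC socket and expects the injected digest corruptions to have surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions. Below is a minimal sketch of that check, built only from the rpc.py path, socket, bdev name and jq filter visible in the trace; the real get_transient_errcount/bperf_rpc helpers live in SPDK's test scripts and are not reproduced in this log, so treat the wrapper shell as an approximation.

#!/usr/bin/env bash
# Sketch (assumption): approximates the get_transient_errcount helper traced above.
# Paths, socket and bdev name are the ones shown in the log.
set -euo pipefail

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # from the trace
bperf_sock=/var/tmp/bperf.sock                              # bdevperf RPC socket from the trace

get_transient_errcount() {
    local bdev=$1
    # Ask bdevperf for per-bdev I/O statistics and pull out the transient
    # transport error counter from the NVMe error accounting.
    "$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The run above counted 162 such completions; the test only requires a
# non-zero count, i.e. the corrupted digests actually became visible errors.
(( errcount > 0 ))

The counter is only populated because bdev_nvme_set_options --nvme-error-stat is issued before the controller is attached, as the trace for the next run shows a little further down.
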
00:25:18.297 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.557 [2024-04-27 00:09:48.548118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.557 [2024-04-27 00:09:48.611131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.128 00:09:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:19.129 00:09:49 -- common/autotest_common.sh@850 -- # return 0 00:25:19.129 00:09:49 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:19.129 00:09:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:19.389 00:09:49 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:19.389 00:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.389 00:09:49 -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 00:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.389 00:09:49 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.389 00:09:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.650 nvme0n1 00:25:19.650 00:09:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:19.650 00:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.650 00:09:49 -- common/autotest_common.sh@10 -- # set +x 00:25:19.650 00:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.650 00:09:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:19.650 00:09:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.650 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:19.650 Zero copy mechanism will not be used. 00:25:19.650 Running I/O for 2 seconds... 
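
Before the I/O above starts, the trace shows the whole arming sequence for this phase: reset any previous accel error injection, attach the controller over TCP with data digest enabled, then arm crc32c corruption (-t corrupt -i 32) so the digests on the wire stop matching. Collected into one place below as a hedged sketch: every RPC verb, address, NQN and socket path is copied from the trace, while the rpc()/bperf_rpc() wrappers and the default-socket assumption for rpc_cmd are illustrative only.

#!/usr/bin/env bash
# Sketch of the digest-error injection setup traced above (host/digest.sh).
# RPC names and arguments are taken verbatim from the log; wrappers are assumptions.
set -euo pipefail

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

rpc()       { "$rootdir/scripts/rpc.py" "$@"; }                   # default RPC socket (rpc_cmd in the trace)
bperf_rpc() { "$rootdir/scripts/rpc.py" -s "$bperf_sock" "$@"; }  # bdevperf RPC socket

# Per-status-code NVMe error accounting, and keep retrying at the bdev layer (-1)
# so the transient transport errors are counted instead of failing the job.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure no crc32c error injection is armed while the controller attaches.
rpc accel_error_inject_error -o crc32c -t disable

# Attach over NVMe/TCP with data digest (--ddgst) enabled.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption (same flags as the trace); each affected READ then
# completes with "data digest error" + COMMAND TRANSIENT TRANSPORT ERROR (00/22).
rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# bdevperf was started with -z, so it waits for this RPC before running the
# 2-second randread workload (-q 16 -o 131072) whose output follows.
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests
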
00:25:19.650 [2024-04-27 00:09:49.862113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:19.650 [2024-04-27 00:09:49.862153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.650 [2024-04-27 00:09:49.862164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-message pattern (nvme_tcp.c:1447 data digest error on tqpair=(0x8e83e0), nvme_qpair.c:243 READ command print with len:32, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats with varying cid/lba/sqhd values throughout this 2-second, qd=16, 131072-byte randread run, timestamps 00:09:49.873 through 00:09:50.619 ...]
00:25:20.439 [2024-04-27 00:09:50.630399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.439 [2024-04-27 00:09:50.630421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.439 [2024-04-27 00:09:50.630430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.439 [2024-04-27 00:09:50.641044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.439 [2024-04-27 00:09:50.641066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.439 [2024-04-27 00:09:50.641075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.439 [2024-04-27 00:09:50.651575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.439 [2024-04-27 00:09:50.651598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.439 [2024-04-27 00:09:50.651610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.661933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.661957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.661965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.671852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.671874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.671882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.682679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.682702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.682711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.692663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.692686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.692694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.703979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.704001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.704009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.714541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.714564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.714572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.724700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.724722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.724731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.734095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.734118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.734127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.743036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.743058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.743067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.751768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.751790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.751799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.760230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.701 [2024-04-27 00:09:50.760252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.701 [2024-04-27 00:09:50.760260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.701 [2024-04-27 00:09:50.770714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.770737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.770745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.780326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.780348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.780356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.790340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.790362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.790371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.799536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.799558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.799567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.809518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.809540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.809548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.819015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.819038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.819050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.828768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.828790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.828798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.838721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.838743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 
[2024-04-27 00:09:50.838752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.848710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.848732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.848740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.857645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.857668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.857677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.867277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.867300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.867308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.876782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.876804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.876813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.887417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.887439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.887448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.897683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.897706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.897714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.907766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.907792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.907800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.702 [2024-04-27 00:09:50.920244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.702 [2024-04-27 00:09:50.920266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.702 [2024-04-27 00:09:50.920275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.963 [2024-04-27 00:09:50.933221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.963 [2024-04-27 00:09:50.933243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.963 [2024-04-27 00:09:50.933252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.963 [2024-04-27 00:09:50.946085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.963 [2024-04-27 00:09:50.946106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.963 [2024-04-27 00:09:50.946115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.963 [2024-04-27 00:09:50.956882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.963 [2024-04-27 00:09:50.956904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.963 [2024-04-27 00:09:50.956912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.963 [2024-04-27 00:09:50.966267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.963 [2024-04-27 00:09:50.966289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.963 [2024-04-27 00:09:50.966297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.963 [2024-04-27 00:09:50.974874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.963 [2024-04-27 00:09:50.974896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.963 [2024-04-27 00:09:50.974904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.963 [2024-04-27 00:09:50.984974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:50.984996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:50.985004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:50.994881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:50.994904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:50.994913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.003977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.004000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.004008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.013537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.013560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.013568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.024354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.024376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.024384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.034514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.034537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.034545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.044334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.044356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.044365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.054382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.054404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.054412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.065683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.065705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.065714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.077429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.077451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.077460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.089606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.089628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.089640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.102024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.102047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.102055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.111094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.111117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.111125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.120224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.120246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.120254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.129233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 
[2024-04-27 00:09:51.129255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.129264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.138478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.138500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.138509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.147013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.147035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.147044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.157702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.157725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.157734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.168149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.168172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.168180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.964 [2024-04-27 00:09:51.178794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:20.964 [2024-04-27 00:09:51.178820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.964 [2024-04-27 00:09:51.178829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.188668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.188691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.188699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.198144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.198166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.198174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.207292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.207315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.207323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.218003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.218026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.218034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.227077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.227099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.227108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.237571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.237593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.237601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.247815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.247843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.247852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.260898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.260920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.260929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.273791] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.273814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.273823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.286630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.286653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.286661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.299269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.299292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.226 [2024-04-27 00:09:51.299301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.226 [2024-04-27 00:09:51.309993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.226 [2024-04-27 00:09:51.310016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.310024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.323185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.323207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.323216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.335510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.335532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.335541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.344996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.345018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.345027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:21.227 [2024-04-27 00:09:51.354442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.354464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.354472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.364099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.364122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.364134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.373862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.373883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.373892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.383046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.383069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.383078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.393877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.393900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.393908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.402817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.402844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.402853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.414193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.414216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.414224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.424932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.424954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.424962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.435261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.435284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.435293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.227 [2024-04-27 00:09:51.444338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.227 [2024-04-27 00:09:51.444361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.227 [2024-04-27 00:09:51.444369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.452865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.452891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.452900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.466782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.466804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.466813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.481752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.481774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.481782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.495310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.495332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.495341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.509425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.509447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.509456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.523056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.523078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.523087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.537525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.537548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.537556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.551378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.551400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.551409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.564599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.564621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.564630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.576466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.576489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.576498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.586326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.586348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.586357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.596039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.596062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.596070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.606657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.606680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.606688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.616382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.616404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.616412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.627135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.627158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.627167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.637480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.637502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.637510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.647946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.647968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.647977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.659533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.659556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 
[2024-04-27 00:09:51.659568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.670973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.670995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.671005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.680600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.680623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.680631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.489 [2024-04-27 00:09:51.689711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.489 [2024-04-27 00:09:51.689733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.489 [2024-04-27 00:09:51.689741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.490 [2024-04-27 00:09:51.700637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.490 [2024-04-27 00:09:51.700660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.490 [2024-04-27 00:09:51.700669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.750 [2024-04-27 00:09:51.710949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.750 [2024-04-27 00:09:51.710972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.750 [2024-04-27 00:09:51.710981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.750 [2024-04-27 00:09:51.722574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.750 [2024-04-27 00:09:51.722595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.750 [2024-04-27 00:09:51.722604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.750 [2024-04-27 00:09:51.732125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.750 [2024-04-27 00:09:51.732146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.750 [2024-04-27 00:09:51.732155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.750 [2024-04-27 00:09:51.741662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.750 [2024-04-27 00:09:51.741685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.750 [2024-04-27 00:09:51.741693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.751 [2024-04-27 00:09:51.752665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.751 [2024-04-27 00:09:51.752691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.751 [2024-04-27 00:09:51.752699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.751 [2024-04-27 00:09:51.762747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.751 [2024-04-27 00:09:51.762769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.751 [2024-04-27 00:09:51.762777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.751 [2024-04-27 00:09:51.773052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.751 [2024-04-27 00:09:51.773074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.751 [2024-04-27 00:09:51.773083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.751 [2024-04-27 00:09:51.783390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.751 [2024-04-27 00:09:51.783413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.751 [2024-04-27 00:09:51.783422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.751 [2024-04-27 00:09:51.792065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.751 [2024-04-27 00:09:51.792088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.751 [2024-04-27 00:09:51.792096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.751 [2024-04-27 00:09:51.802688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0) 00:25:21.751 [2024-04-27 00:09:51.802710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.751 [2024-04-27 00:09:51.802719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:21.751 [2024-04-27 00:09:51.813748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0)
00:25:21.751 [2024-04-27 00:09:51.813770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.751 [2024-04-27 00:09:51.813778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:21.751 [2024-04-27 00:09:51.823297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0)
00:25:21.751 [2024-04-27 00:09:51.823320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.751 [2024-04-27 00:09:51.823328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:21.751 [2024-04-27 00:09:51.832277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0)
00:25:21.751 [2024-04-27 00:09:51.832299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.751 [2024-04-27 00:09:51.832307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:21.751 [2024-04-27 00:09:51.842087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0)
00:25:21.751 [2024-04-27 00:09:51.842110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.751 [2024-04-27 00:09:51.842118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:21.751 [2024-04-27 00:09:51.852036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8e83e0)
00:25:21.751 [2024-04-27 00:09:51.852059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.751 [2024-04-27 00:09:51.852067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:21.751
00:25:21.751 Latency(us)
00:25:21.751 Device Information          : runtime(s)    IOPS    MiB/s   Fail/s   TO/s   Average      min       max
00:25:21.751 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:21.751 nvme0n1                     :       2.00  2976.07   372.01     0.00   0.00   5371.40  1378.99  14527.15
00:25:21.751 ===================================================================================================================
00:25:21.751 Total                       :             2976.07   372.01     0.00   0.00   5371.40  1378.99  14527.15
00:25:21.751 0
00:25:21.751 00:09:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:21.751 00:09:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:21.751 00:09:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:21.751 | .driver_specific
00:25:21.751 | .nvme_error
00:25:21.751 | .status_code
00:25:21.751 | .command_transient_transport_error'
00:25:21.751 00:09:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:22.011 00:09:52 -- host/digest.sh@71 -- # (( 192 > 0 ))
00:25:22.011 00:09:52 -- host/digest.sh@73 -- # killprocess 540191
00:25:22.011 00:09:52 -- common/autotest_common.sh@936 -- # '[' -z 540191 ']'
00:25:22.011 00:09:52 -- common/autotest_common.sh@940 -- # kill -0 540191
00:25:22.011 00:09:52 -- common/autotest_common.sh@941 -- # uname
00:25:22.011 00:09:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:22.011 00:09:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 540191
00:25:22.011 00:09:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:22.011 00:09:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:22.011 00:09:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 540191'
00:25:22.011 killing process with pid 540191
00:25:22.011 00:09:52 -- common/autotest_common.sh@955 -- # kill 540191
00:25:22.011 Received shutdown signal, test time was about 2.000000 seconds
00:25:22.011
00:25:22.011 Latency(us)
00:25:22.011 Device Information          : runtime(s)    IOPS    MiB/s   Fail/s   TO/s   Average      min       max
00:25:22.011 ===================================================================================================================
00:25:22.011 Total                       :                0.00     0.00     0.00   0.00      0.00     0.00      0.00
00:25:22.011 00:09:52 -- common/autotest_common.sh@960 -- # wait 540191
00:25:22.011 00:09:52 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:22.011 00:09:52 -- host/digest.sh@54 -- # local rw bs qd
00:25:22.011 00:09:52 -- host/digest.sh@56 -- # rw=randwrite
00:25:22.011 00:09:52 -- host/digest.sh@56 -- # bs=4096
00:25:22.011 00:09:52 -- host/digest.sh@56 -- # qd=128
00:25:22.011 00:09:52 -- host/digest.sh@58 -- # bperfpid=540962
00:25:22.011 00:09:52 -- host/digest.sh@60 -- # waitforlisten 540962 /var/tmp/bperf.sock
00:25:22.011 00:09:52 -- common/autotest_common.sh@817 -- # '[' -z 540962 ']'
00:25:22.011 00:09:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:22.011 00:09:52 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:22.011 00:09:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:22.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:22.011 00:09:52 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:22.011 00:09:52 -- common/autotest_common.sh@10 -- # set +x
00:25:22.011 00:09:52 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:22.271 [2024-04-27 00:09:52.270263] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization...
00:25:22.271 [2024-04-27 00:09:52.270320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid540962 ]
00:25:22.271 EAL: No free 2048 kB hugepages reported on node 1
00:25:22.271 [2024-04-27 00:09:52.328657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:22.271 [2024-04-27 00:09:52.391488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:22.839 00:09:53 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:22.839 00:09:53 -- common/autotest_common.sh@850 -- # return 0
00:25:22.839 00:09:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:22.839 00:09:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:23.098 00:09:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:23.098 00:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:23.098 00:09:53 -- common/autotest_common.sh@10 -- # set +x
00:25:23.098 00:09:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:23.098 00:09:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:23.098 00:09:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:23.358 nvme0n1
00:25:23.358 00:09:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:23.358 00:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:23.358 00:09:53 -- common/autotest_common.sh@10 -- # set +x
00:25:23.358 00:09:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:23.358 00:09:53 -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:23.618 00:09:53 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:23.618 Running I/O for 2 seconds...
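Note: the randwrite error-injection pass that starts here follows the same shape as the randread pass above. A rough sketch of the sequence, assembled only from the commands visible in this trace; the absolute /var/jenkins/workspace/nvmf-tcp-phy-autotest paths are shortened to spdk/ here, the trailing '&' stands in for the script's own waitforlisten handling, and accel_error_inject_error is issued through rpc_cmd (the default RPC socket) rather than through bperf.sock, as shown in the host/digest.sh lines above:

  # start bdevperf idle (-z) on its own RPC socket: 4096-byte randwrite, queue depth 128, 2-second run
  spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # keep per-controller NVMe error statistics and retry failed I/O (-1 = unlimited retries)
  spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the target with data digest enabled, then arm crc32c error injection (-t corrupt -i 256, as in the trace)
  spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the timed workload, then read back the transient transport error counter
  spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

A non-zero count from that final query (192 in the randread pass above) is what get_transient_errcount verifies, so the stream of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" lines that follows is the expected effect of the injected CRC corruption rather than a test failure.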
00:25:23.618 [2024-04-27 00:09:53.657523] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.657717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.657747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.669480] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.669796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.669818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.681413] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.681732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.681753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.693321] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.693689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.693710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.705262] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.705592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.705612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.717191] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.717504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.717523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.729128] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.729453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.729473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 
cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.740998] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.741286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.741305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.752913] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.753230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.753249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.764796] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.765101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.765120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.618 [2024-04-27 00:09:53.776730] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.618 [2024-04-27 00:09:53.777103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.618 [2024-04-27 00:09:53.777122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.619 [2024-04-27 00:09:53.788589] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.619 [2024-04-27 00:09:53.788887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.619 [2024-04-27 00:09:53.788910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.619 [2024-04-27 00:09:53.800460] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.619 [2024-04-27 00:09:53.800767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.619 [2024-04-27 00:09:53.800786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.619 [2024-04-27 00:09:53.812359] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.619 [2024-04-27 00:09:53.812680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.619 [2024-04-27 00:09:53.812699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.619 [2024-04-27 00:09:53.824250] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.619 [2024-04-27 00:09:53.824589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.619 [2024-04-27 00:09:53.824608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.619 [2024-04-27 00:09:53.836091] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.619 [2024-04-27 00:09:53.836410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.619 [2024-04-27 00:09:53.836429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.879 [2024-04-27 00:09:53.847978] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.879 [2024-04-27 00:09:53.848270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.848289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.859891] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.860206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.860225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.871913] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.872223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.872243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.883790] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.884129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.884148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.895662] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.895969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.895989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.907561] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.907874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.907893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.919409] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.919689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.919708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.931231] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.931513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.931531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.943179] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.943460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.943479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.955026] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.955388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.955407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.966862] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.967168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.967187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.978759] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.979041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.979060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:53.990618] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:53.990880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:53.990899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:54.002483] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:54.002660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:54.002680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:54.014333] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:54.014645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:54.014664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:54.026205] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:54.026513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:54.026532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:54.038025] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:54.038326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:54.038345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:54.049965] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:54.050266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:54.050285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:54.061877] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:54.062188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 
00:09:54.062207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:54.073727] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:54.074039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:54.074058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:54.085588] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:54.085910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:54.085929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.880 [2024-04-27 00:09:54.097473] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:23.880 [2024-04-27 00:09:54.097789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.880 [2024-04-27 00:09:54.097811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.109365] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.109672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.109691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.121263] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.121599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.121617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.133090] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.133416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.133435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.144990] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.145318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:24.141 [2024-04-27 00:09:54.145337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.156893] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.157177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.157196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.168797] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.169110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.169129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.180629] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.180941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.180960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.192497] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.192802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.192821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.204587] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.204905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.204924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.216442] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.216753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.216772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.228329] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.228624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2159 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.228642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.240224] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.141 [2024-04-27 00:09:54.240537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.141 [2024-04-27 00:09:54.240557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.141 [2024-04-27 00:09:54.252047] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.252357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.252376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.142 [2024-04-27 00:09:54.263941] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.264204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.264223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.142 [2024-04-27 00:09:54.275747] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.276049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.276069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.142 [2024-04-27 00:09:54.287630] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.287941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.287960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.142 [2024-04-27 00:09:54.299478] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.299785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.299804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.142 [2024-04-27 00:09:54.311375] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.311545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:4884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.311563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.142 [2024-04-27 00:09:54.323229] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.323404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.323423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.142 [2024-04-27 00:09:54.335114] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.335422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.335441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.142 [2024-04-27 00:09:54.346953] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.347255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.347274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.142 [2024-04-27 00:09:54.358791] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.142 [2024-04-27 00:09:54.359192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.142 [2024-04-27 00:09:54.359211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.402 [2024-04-27 00:09:54.370659] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.370841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.370860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.382547] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.382848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.382868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.394419] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.394726] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.394746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.406298] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.406582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.406605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.418193] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.418500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.418519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.430059] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.430343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.430362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.441884] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.442208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.442227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.453732] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.454040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.454058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.465612] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.465928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.465947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.477443] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.477740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.477759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.489310] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.489619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.489637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.501175] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.501476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.501495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.513017] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.513336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.513355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.524872] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.525194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.525213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.536786] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.537123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.537143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.548673] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.548975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.548994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.560527] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 
00:09:54.560826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.560849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.572352] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.572647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.572666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.584164] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.584427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.584447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.596022] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.596195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.596214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.607876] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.608194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.608214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.403 [2024-04-27 00:09:54.619747] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.403 [2024-04-27 00:09:54.620038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.403 [2024-04-27 00:09:54.620057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.664 [2024-04-27 00:09:54.631656] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.664 [2024-04-27 00:09:54.631987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.664 [2024-04-27 00:09:54.632006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.664 [2024-04-27 00:09:54.643511] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 
00:25:24.664 [2024-04-27 00:09:54.643831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.664 [2024-04-27 00:09:54.643854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.664 [2024-04-27 00:09:54.655375] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.664 [2024-04-27 00:09:54.655674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.664 [2024-04-27 00:09:54.655692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.664 [2024-04-27 00:09:54.667234] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.664 [2024-04-27 00:09:54.667537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.664 [2024-04-27 00:09:54.667555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.664 [2024-04-27 00:09:54.679102] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.664 [2024-04-27 00:09:54.679393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.664 [2024-04-27 00:09:54.679412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.664 [2024-04-27 00:09:54.690974] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.664 [2024-04-27 00:09:54.691283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.664 [2024-04-27 00:09:54.691302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.664 [2024-04-27 00:09:54.702795] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.664 [2024-04-27 00:09:54.703000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.664 [2024-04-27 00:09:54.703020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.664 [2024-04-27 00:09:54.714644] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.664 [2024-04-27 00:09:54.714822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.714848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.726514] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) 
with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.726813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.726832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.738388] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.738702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.738721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.750276] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.750581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.750600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.762131] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.762431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.762449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.774002] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.774296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.774314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.785861] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.786181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.786200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.797689] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.798005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.798024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.809489] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.809799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.809819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.821360] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.821676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.821695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.833238] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.833522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.833541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.845089] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.845389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.845408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.856947] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.857263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.857282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.868795] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.869180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.869199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.665 [2024-04-27 00:09:54.880733] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.665 [2024-04-27 00:09:54.881037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.665 [2024-04-27 00:09:54.881057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:54.892596] 
tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.892895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.892914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:54.904500] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.904808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.904826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:54.916414] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.916736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.916755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:54.928264] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.928589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.928608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:54.940188] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.940490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.940509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:54.952054] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.952359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.952378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:54.963925] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.964206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.964224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 
[2024-04-27 00:09:54.975787] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.975978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.975997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:54.987691] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.987933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.987952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:54.999508] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:54.999782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:54.999801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:55.011402] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:55.011718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:55.011736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:55.023269] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:55.023585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:55.023607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:55.035102] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:55.035404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:55.035422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:55.046949] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:55.047250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:55.047269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c 
p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:55.058822] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:55.059109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:55.059127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:55.070726] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:55.071029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.927 [2024-04-27 00:09:55.071048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.927 [2024-04-27 00:09:55.082605] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.927 [2024-04-27 00:09:55.082909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.928 [2024-04-27 00:09:55.082928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.928 [2024-04-27 00:09:55.094486] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.928 [2024-04-27 00:09:55.094757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.928 [2024-04-27 00:09:55.094776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.928 [2024-04-27 00:09:55.106364] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.928 [2024-04-27 00:09:55.106665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.928 [2024-04-27 00:09:55.106683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.928 [2024-04-27 00:09:55.118289] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.928 [2024-04-27 00:09:55.118607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.928 [2024-04-27 00:09:55.118626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.928 [2024-04-27 00:09:55.130178] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.928 [2024-04-27 00:09:55.130502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.928 [2024-04-27 00:09:55.130521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.928 [2024-04-27 00:09:55.142063] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:24.928 [2024-04-27 00:09:55.142336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.928 [2024-04-27 00:09:55.142355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.153915] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.154220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.154238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.165757] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.166073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.166091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.177630] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.177897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.177916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.189471] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.189778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.189796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.201572] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.201876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.201895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.213419] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.213734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.213752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.225250] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.225559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.225578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.237163] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.237469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.237489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.249026] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.249197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.249216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.260915] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.261223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.261242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.272781] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.273103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.273122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.284607] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.284781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.284801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.296454] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.296635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.296654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.308308] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.308603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.308622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.320149] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.189 [2024-04-27 00:09:55.320438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.189 [2024-04-27 00:09:55.320457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.189 [2024-04-27 00:09:55.332086] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.190 [2024-04-27 00:09:55.332402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.190 [2024-04-27 00:09:55.332424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.190 [2024-04-27 00:09:55.343983] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.190 [2024-04-27 00:09:55.344288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.190 [2024-04-27 00:09:55.344307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.190 [2024-04-27 00:09:55.355830] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.190 [2024-04-27 00:09:55.356147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.190 [2024-04-27 00:09:55.356166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.190 [2024-04-27 00:09:55.367673] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.190 [2024-04-27 00:09:55.367851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.190 [2024-04-27 00:09:55.367870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.190 [2024-04-27 00:09:55.379545] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.190 [2024-04-27 00:09:55.379811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.190 [2024-04-27 00:09:55.379831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.190 [2024-04-27 00:09:55.391386] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.190 [2024-04-27 00:09:55.391688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.190 [2024-04-27 00:09:55.391707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.190 [2024-04-27 00:09:55.403257] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.190 [2024-04-27 00:09:55.403563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.190 [2024-04-27 00:09:55.403583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.415116] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.415416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.415435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.426973] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.427261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.427279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.438835] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.439134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.439153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.450702] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.450992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.451011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.462572] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.462888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 
00:09:55.462907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.474527] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.474826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.474857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.486315] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.486616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.486635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.498198] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.498498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.498517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.510064] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.510363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.510382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.521959] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.522261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.522280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.533828] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.534117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.534136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.545696] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.546004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24498 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:25.452 [2024-04-27 00:09:55.546024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.557574] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.557846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.557865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.569443] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.569737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.569756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.581322] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.581593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.581612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.593159] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.593476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.593495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.605033] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.605344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.605363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.616863] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.617169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.617188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.628725] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.629037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5748 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.629056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.640644] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.640929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.640952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 [2024-04-27 00:09:55.652507] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365920) with pdu=0x2000190f9b30 00:25:25.452 [2024-04-27 00:09:55.652792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.452 [2024-04-27 00:09:55.652811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:25.452 00:25:25.452 Latency(us) 00:25:25.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.452 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:25.452 nvme0n1 : 2.01 21490.48 83.95 0.00 0.00 5943.52 3167.57 12124.16 00:25:25.452 =================================================================================================================== 00:25:25.452 Total : 21490.48 83.95 0.00 0.00 5943.52 3167.57 12124.16 00:25:25.452 0 00:25:25.453 00:09:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:25.453 00:09:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:25.453 00:09:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:25.453 00:09:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:25.453 | .driver_specific 00:25:25.453 | .nvme_error 00:25:25.453 | .status_code 00:25:25.453 | .command_transient_transport_error' 00:25:25.715 00:09:55 -- host/digest.sh@71 -- # (( 169 > 0 )) 00:25:25.715 00:09:55 -- host/digest.sh@73 -- # killprocess 540962 00:25:25.715 00:09:55 -- common/autotest_common.sh@936 -- # '[' -z 540962 ']' 00:25:25.715 00:09:55 -- common/autotest_common.sh@940 -- # kill -0 540962 00:25:25.715 00:09:55 -- common/autotest_common.sh@941 -- # uname 00:25:25.715 00:09:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:25.715 00:09:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 540962 00:25:25.715 00:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:25.715 00:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:25.715 00:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 540962' 00:25:25.715 killing process with pid 540962 00:25:25.715 00:09:55 -- common/autotest_common.sh@955 -- # kill 540962 00:25:25.715 Received shutdown signal, test time was about 2.000000 seconds 00:25:25.715 00:25:25.715 Latency(us) 00:25:25.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.715 =================================================================================================================== 00:25:25.715 Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:25:25.715 00:09:55 -- common/autotest_common.sh@960 -- # wait 540962 00:25:25.977 00:09:56 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:25.977 00:09:56 -- host/digest.sh@54 -- # local rw bs qd 00:25:25.977 00:09:56 -- host/digest.sh@56 -- # rw=randwrite 00:25:25.977 00:09:56 -- host/digest.sh@56 -- # bs=131072 00:25:25.977 00:09:56 -- host/digest.sh@56 -- # qd=16 00:25:25.977 00:09:56 -- host/digest.sh@58 -- # bperfpid=541649 00:25:25.977 00:09:56 -- host/digest.sh@60 -- # waitforlisten 541649 /var/tmp/bperf.sock 00:25:25.977 00:09:56 -- common/autotest_common.sh@817 -- # '[' -z 541649 ']' 00:25:25.977 00:09:56 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:25.977 00:09:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:25.977 00:09:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:25.977 00:09:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:25.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:25.977 00:09:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.977 00:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:25.977 [2024-04-27 00:09:56.069737] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:25:25.977 [2024-04-27 00:09:56.069789] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541649 ] 00:25:25.977 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:25.977 Zero copy mechanism will not be used. 
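The trace above is the per-pass teardown and relaunch that host/digest.sh performs: it reads the bdev's I/O statistics over the bdevperf RPC socket, pulls the transient-transport-error counter out with jq, checks that it is greater than zero (169 in this pass), kills that bdevperf instance, and starts a fresh one for the next workload (randwrite, 128 KiB I/O, queue depth 16). A minimal standalone sketch of that check, reusing the rpc.py invocation and jq filter shown in the log; the SPDK path and socket name are the ones used on this CI host and would differ elsewhere:

  #!/usr/bin/env bash
  # Count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1
  # and fail if none were seen (i.e. the injected digest errors were not detected).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
  SOCK=/var/tmp/bperf.sock                                     # bdevperf RPC socket
  errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) || { echo "no transient transport errors counted" >&2; exit 1; }

The nvme_error counters only appear in the bdev_get_iostat output because the controller is attached after bdev_nvme_set_options --nvme-error-stat, as the setup for the next pass below shows.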
00:25:25.977 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.977 [2024-04-27 00:09:56.129614] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.977 [2024-04-27 00:09:56.191564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.920 00:09:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.920 00:09:56 -- common/autotest_common.sh@850 -- # return 0 00:25:26.920 00:09:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:26.920 00:09:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:26.920 00:09:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:26.920 00:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.920 00:09:56 -- common/autotest_common.sh@10 -- # set +x 00:25:26.920 00:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.920 00:09:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.920 00:09:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.180 nvme0n1 00:25:27.180 00:09:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:27.180 00:09:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.180 00:09:57 -- common/autotest_common.sh@10 -- # set +x 00:25:27.180 00:09:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.180 00:09:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:27.180 00:09:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:27.446 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:27.446 Zero copy mechanism will not be used. 00:25:27.446 Running I/O for 2 seconds... 
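The setup sequence for this second pass mirrors the earlier one: NVMe error statistics and unlimited command retries are enabled on bdevperf's bdev_nvme layer, CRC32C error injection is disabled while the controller is attached with data digest (--ddgst) over TCP, corruption is then re-enabled with the -i 32 argument captured in the trace, and perform_tests drives the configured randwrite job for two seconds. A condensed sketch of those calls; the addresses and paths are copied from this run, and the assumption that rpc_cmd talks to the nvmf target over its default RPC socket is mine (the trace does not expand that invocation):

  #!/usr/bin/env bash
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf instance
  TGT_RPC="$SPDK_DIR/scripts/rpc.py"                            # nvmf target, default socket (assumed)

  # Count NVMe errors per status code and retry failed commands indefinitely,
  # so injected digest errors are tallied instead of failing the job outright.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep CRC32C corruption off while connecting with data digest enabled...
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ...then turn CRC32C corruption back on (arguments as captured in the trace)
  # and run the bdevperf job that was started with -w randwrite -o 131072 -q 16.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Every corrupted digest then shows up below as a data_crc32_calc_done error followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, this time against the 128 KiB randwrite workload.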
00:25:27.446 [2024-04-27 00:09:57.451190] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.446 [2024-04-27 00:09:57.451609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.446 [2024-04-27 00:09:57.451644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.446 [2024-04-27 00:09:57.464068] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.446 [2024-04-27 00:09:57.464474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.446 [2024-04-27 00:09:57.464499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.446 [2024-04-27 00:09:57.477507] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.446 [2024-04-27 00:09:57.477918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.446 [2024-04-27 00:09:57.477941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.446 [2024-04-27 00:09:57.490565] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.446 [2024-04-27 00:09:57.490965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.446 [2024-04-27 00:09:57.490986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.446 [2024-04-27 00:09:57.503253] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.446 [2024-04-27 00:09:57.503687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.446 [2024-04-27 00:09:57.503708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.446 [2024-04-27 00:09:57.515213] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.446 [2024-04-27 00:09:57.515614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.515635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.528405] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.528758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.528779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.541214] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.541571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.541591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.553413] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.553829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.553855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.565791] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.566209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.566230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.578393] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.578808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.578830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.591597] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.591912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.591932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.605523] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.605658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.605677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.614830] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.615192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.615212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.622938] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.623206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.623227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.631733] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.632037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.632058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.640425] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.640854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.640875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.649272] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.649657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.649677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.447 [2024-04-27 00:09:57.658547] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.447 [2024-04-27 00:09:57.658922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.447 [2024-04-27 00:09:57.658942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.669069] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.669550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.669570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.679041] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.679438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.679462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.689056] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.689405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.689426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.697886] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.698159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.698180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.706284] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.706722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.706742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.714767] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.715160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.715181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.724951] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.725223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.725243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.736568] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.737091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.737112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.748075] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.748538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 
[2024-04-27 00:09:57.748558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.758336] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.758693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.758714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.769910] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.770299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.770319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.779957] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.780312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.780332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.789569] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.789822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.789848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.799044] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.799516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.799537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.809881] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.810438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.810458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.820725] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.821228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.821249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.831995] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.832394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.832414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.843902] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.844268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.844288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.854868] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.855045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.855064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.866430] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.866845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.866865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.877921] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.878360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.792 [2024-04-27 00:09:57.878381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.792 [2024-04-27 00:09:57.887319] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.792 [2024-04-27 00:09:57.887703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.887723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.793 [2024-04-27 00:09:57.899461] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.793 [2024-04-27 00:09:57.899750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.899770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.793 [2024-04-27 00:09:57.911043] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.793 [2024-04-27 00:09:57.911567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.911587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.793 [2024-04-27 00:09:57.923863] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.793 [2024-04-27 00:09:57.924358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.924378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.793 [2024-04-27 00:09:57.935739] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.793 [2024-04-27 00:09:57.936091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.936111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.793 [2024-04-27 00:09:57.947075] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.793 [2024-04-27 00:09:57.947456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.947476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.793 [2024-04-27 00:09:57.959894] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.793 [2024-04-27 00:09:57.960347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.960370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.793 [2024-04-27 00:09:57.971124] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.793 [2024-04-27 00:09:57.971644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.971666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.793 [2024-04-27 00:09:57.982927] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.793 [2024-04-27 00:09:57.983391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.983411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.793 [2024-04-27 00:09:57.996147] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:27.793 [2024-04-27 00:09:57.996570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.793 [2024-04-27 00:09:57.996591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.008031] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.008352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.008372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.019347] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.019747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.019767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.031181] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.031499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.031519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.043025] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.043320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.043340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.053220] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.053442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.053462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.065580] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 
[2024-04-27 00:09:58.065937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.065959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.077277] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.077607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.077628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.088205] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.088603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.088623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.098699] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.099046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.099067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.108535] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.108918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.108939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.116569] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.116937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.116957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.125780] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.058 [2024-04-27 00:09:58.126009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.058 [2024-04-27 00:09:58.126030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.058 [2024-04-27 00:09:58.133302] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.133682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.133702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.142223] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.142705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.142725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.152179] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.152566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.152587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.162384] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.162793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.162813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.171478] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.171702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.171722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.179825] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.180386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.180407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.190145] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.190501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.190521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.199762] 
tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.200129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.200151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.209547] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.209978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.209998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.220020] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.220363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.220384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.229564] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.229787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.229811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.239409] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.239759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.239779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.249533] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.250060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.250082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.257863] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.258098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.258119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:28.059 [2024-04-27 00:09:58.264946] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.265231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.265251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.059 [2024-04-27 00:09:58.270969] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.059 [2024-04-27 00:09:58.271183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.059 [2024-04-27 00:09:58.271203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.279058] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.279426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.279446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.285903] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.286118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.286138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.290236] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.290450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.290470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.294599] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.294852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.294872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.299459] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.299669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.299690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.303637] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.303851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.303872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.307767] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.307981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.308002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.313309] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.313627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.313648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.317662] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.317877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.317897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.322809] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.323025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.323045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.326891] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.327100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.327120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.331878] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.332087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.332110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.336052] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.336260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.336280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.341571] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.341808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.341828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.346347] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.322 [2024-04-27 00:09:58.346557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.322 [2024-04-27 00:09:58.346578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.322 [2024-04-27 00:09:58.352279] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.352487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.352507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.358024] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.358234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.358255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.368157] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.368391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.368411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.378699] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.378992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.379012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.390486] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.391009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.391030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.402264] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.402662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.402683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.414604] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.415043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.415064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.426296] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.426550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.426570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.436511] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.436954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.436974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.447332] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.447657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.447677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.457895] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.458449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 
[2024-04-27 00:09:58.458471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.467750] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.467992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.468013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.478292] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.478524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.478545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.488889] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.489298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.489318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.498211] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.498574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.498595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.505288] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.505570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.505590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.513521] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.513830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.513856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.519296] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.519627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.519647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.524887] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.525240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.525260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.531777] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.532119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.532140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.323 [2024-04-27 00:09:58.540163] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.323 [2024-04-27 00:09:58.540400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.323 [2024-04-27 00:09:58.540420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.586 [2024-04-27 00:09:58.547287] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.586 [2024-04-27 00:09:58.547499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.586 [2024-04-27 00:09:58.547520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.586 [2024-04-27 00:09:58.552576] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.586 [2024-04-27 00:09:58.552789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.586 [2024-04-27 00:09:58.552813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.586 [2024-04-27 00:09:58.560454] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.586 [2024-04-27 00:09:58.560669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.586 [2024-04-27 00:09:58.560690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.568587] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.569067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.569089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.577950] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.578173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.578194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.584002] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.584216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.584237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.591255] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.591651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.591671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.599110] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.599325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.599346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.604290] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.604513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.604534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.610427] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.610713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.610733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.617871] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.618181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.618202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.627636] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.627992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.628013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.637927] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.638177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.638197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.648873] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.649326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.649347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.659347] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.659599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.659619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.670805] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.671223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.671244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.681331] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.681554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.681574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.689549] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 
[2024-04-27 00:09:58.689943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.689963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.695289] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.695664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.695685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.701303] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.701522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.701542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.706460] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.706675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.706695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.713140] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.713579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.713599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.721814] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.722067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.722087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.729732] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.730085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.730105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.738038] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.738363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.738383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.746255] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.746484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.746505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.587 [2024-04-27 00:09:58.755790] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.587 [2024-04-27 00:09:58.756113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.587 [2024-04-27 00:09:58.756132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.588 [2024-04-27 00:09:58.764895] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.588 [2024-04-27 00:09:58.765055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.588 [2024-04-27 00:09:58.765077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.588 [2024-04-27 00:09:58.775050] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.588 [2024-04-27 00:09:58.775403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.588 [2024-04-27 00:09:58.775423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.588 [2024-04-27 00:09:58.784646] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.588 [2024-04-27 00:09:58.784799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.588 [2024-04-27 00:09:58.784818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.588 [2024-04-27 00:09:58.793466] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.588 [2024-04-27 00:09:58.793689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.588 [2024-04-27 00:09:58.793708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.588 [2024-04-27 00:09:58.801882] 
tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.588 [2024-04-27 00:09:58.802064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.588 [2024-04-27 00:09:58.802084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.850 [2024-04-27 00:09:58.811116] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.850 [2024-04-27 00:09:58.811418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.850 [2024-04-27 00:09:58.811438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.850 [2024-04-27 00:09:58.820956] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.850 [2024-04-27 00:09:58.821109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.850 [2024-04-27 00:09:58.821128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.850 [2024-04-27 00:09:58.829830] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.850 [2024-04-27 00:09:58.830281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.850 [2024-04-27 00:09:58.830301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.850 [2024-04-27 00:09:58.837364] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.850 [2024-04-27 00:09:58.837564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.850 [2024-04-27 00:09:58.837583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.850 [2024-04-27 00:09:58.845193] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.850 [2024-04-27 00:09:58.845412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.850 [2024-04-27 00:09:58.845431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.850 [2024-04-27 00:09:58.853439] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.850 [2024-04-27 00:09:58.853732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.850 [2024-04-27 00:09:58.853751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:28.850 [2024-04-27 00:09:58.863113] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.850 [2024-04-27 00:09:58.863571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.850 [2024-04-27 00:09:58.863592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.850 [2024-04-27 00:09:58.871126] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.850 [2024-04-27 00:09:58.871454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.871474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.880285] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.880506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.880526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.888099] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.888414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.888434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.897048] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.897323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.897343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.906652] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.906956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.906975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.915651] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.915811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.915834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.924199] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.924561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.924580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.933670] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.934002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.934022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.943203] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.943419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.943440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.952550] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.952733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.952752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.962378] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.962722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.962741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.971547] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.971763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.971784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.980934] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.981116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.981136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.990725] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.990915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.990935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:58.999004] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:58.999289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:58.999309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:59.008190] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:59.008510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:59.008530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:59.018069] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:59.018353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:59.018373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:59.027804] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:59.027967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:59.027987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:59.033896] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:59.034209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:59.034230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:59.038971] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:59.039123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:59.039141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:59.044047] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:59.044265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:59.044285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:59.047921] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:59.048065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:59.048084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.851 [2024-04-27 00:09:59.051554] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.851 [2024-04-27 00:09:59.051698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.851 [2024-04-27 00:09:59.051717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.852 [2024-04-27 00:09:59.055533] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.852 [2024-04-27 00:09:59.055678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.852 [2024-04-27 00:09:59.055696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.852 [2024-04-27 00:09:59.058934] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.852 [2024-04-27 00:09:59.059076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.852 [2024-04-27 00:09:59.059095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.852 [2024-04-27 00:09:59.062269] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.852 [2024-04-27 00:09:59.062399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.852 [2024-04-27 00:09:59.062418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.852 [2024-04-27 00:09:59.065601] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:28.852 [2024-04-27 00:09:59.065730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.852 
[2024-04-27 00:09:59.065748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.070515] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.070661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.070680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.074497] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.074824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.074849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.078279] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.078416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.078434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.082194] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.082477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.082497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.086063] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.086222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.086244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.090863] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.091105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.091126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.097497] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.097611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.097630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.101267] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.101381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.101400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.104672] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.104784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.104802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.108022] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.108136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.108155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.111398] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.111510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.111528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.115108] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.115226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.115244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.119538] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.119651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.119670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.123391] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.123508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.123527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.126766] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.126883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.126902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.130134] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.130247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.130265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.133520] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.133630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.133648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.137154] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.114 [2024-04-27 00:09:59.137281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.114 [2024-04-27 00:09:59.137300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.114 [2024-04-27 00:09:59.142281] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.142408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.142427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.148276] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.148417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.148436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.152681] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.152832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.152857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.157973] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.158091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.158109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.165944] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.166074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.166093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.171654] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.171882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.171901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.175437] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.175569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.175589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.179046] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.179227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.179246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.183367] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.183531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.183549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.189800] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 
[2024-04-27 00:09:59.190174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.190194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.199165] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.199463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.199483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.207815] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.208236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.208257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.215543] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.215747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.215769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.223091] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.223328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.223347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.232565] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.232910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.232930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.240310] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.240434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.240453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.245154] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.245298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.245316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.249499] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.249634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.249653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.253050] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.253169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.253187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.256588] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.256720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.256739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.259972] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.260088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.260107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.263407] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.263530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.263549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.266758] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.266908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.266927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.270091] 
tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.270202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.270220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.273400] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.115 [2024-04-27 00:09:59.273513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.115 [2024-04-27 00:09:59.273531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.115 [2024-04-27 00:09:59.276689] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.276802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.276821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.116 [2024-04-27 00:09:59.280225] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.280338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.280357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.116 [2024-04-27 00:09:59.283671] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.283790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.283809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.116 [2024-04-27 00:09:59.289524] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.289668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.289687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.116 [2024-04-27 00:09:59.293507] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.293724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.293743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:29.116 [2024-04-27 00:09:59.302953] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.303215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.303234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.116 [2024-04-27 00:09:59.312204] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.312516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.312536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.116 [2024-04-27 00:09:59.318265] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.318384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.318402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.116 [2024-04-27 00:09:59.321888] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.322009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.322027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.116 [2024-04-27 00:09:59.325402] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.325536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.325554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.116 [2024-04-27 00:09:59.329565] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.116 [2024-04-27 00:09:59.329706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.116 [2024-04-27 00:09:59.329724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.377 [2024-04-27 00:09:59.332936] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.377 [2024-04-27 00:09:59.333053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.377 [2024-04-27 00:09:59.333071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.377 [2024-04-27 00:09:59.336301] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.377 [2024-04-27 00:09:59.336415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.377 [2024-04-27 00:09:59.336434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.377 [2024-04-27 00:09:59.340884] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.377 [2024-04-27 00:09:59.341259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.377 [2024-04-27 00:09:59.341282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.377 [2024-04-27 00:09:59.344828] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.377 [2024-04-27 00:09:59.344950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.377 [2024-04-27 00:09:59.344970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.377 [2024-04-27 00:09:59.348141] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.377 [2024-04-27 00:09:59.348254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.377 [2024-04-27 00:09:59.348273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.377 [2024-04-27 00:09:59.351438] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.377 [2024-04-27 00:09:59.351551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.377 [2024-04-27 00:09:59.351570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.377 [2024-04-27 00:09:59.354716] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.377 [2024-04-27 00:09:59.354829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.377 [2024-04-27 00:09:59.354854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.377 [2024-04-27 00:09:59.358006] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.377 [2024-04-27 00:09:59.358117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.358135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.361328] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.361441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.361460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.366813] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.367112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.367131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.373555] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.373689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.373708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.377610] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.377778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.377796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.383945] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.384187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.384205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.392990] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.393203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.393222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.403618] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.403963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.403984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.412815] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.413160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.413180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.422529] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.422833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.422857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.378 [2024-04-27 00:09:59.432476] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2365e00) with pdu=0x2000190fef90 00:25:29.378 [2024-04-27 00:09:59.432713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.378 [2024-04-27 00:09:59.432733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.378 00:25:29.378 Latency(us) 00:25:29.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.378 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:29.378 nvme0n1 : 2.01 3905.36 488.17 0.00 0.00 4088.56 1570.13 15837.87 00:25:29.378 =================================================================================================================== 00:25:29.378 Total : 3905.36 488.17 0.00 0.00 4088.56 1570.13 15837.87 00:25:29.378 0 00:25:29.378 00:09:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:29.378 00:09:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:29.378 00:09:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:29.378 | .driver_specific 00:25:29.378 | .nvme_error 00:25:29.378 | .status_code 00:25:29.378 | .command_transient_transport_error' 00:25:29.378 00:09:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:29.639 00:09:59 -- host/digest.sh@71 -- # (( 252 > 0 )) 00:25:29.639 00:09:59 -- host/digest.sh@73 -- # killprocess 541649 00:25:29.639 00:09:59 -- common/autotest_common.sh@936 -- # '[' -z 541649 ']' 00:25:29.639 00:09:59 -- common/autotest_common.sh@940 -- # kill -0 541649 00:25:29.639 00:09:59 -- common/autotest_common.sh@941 -- # uname 00:25:29.639 00:09:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:29.639 00:09:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 541649 00:25:29.639 00:09:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:29.639 00:09:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:29.639 00:09:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 541649' 00:25:29.639 killing process with pid 541649 00:25:29.639 
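The digest-error pass/fail decision traced above comes from bdevperf's own iostat: host/digest.sh queries the bperf RPC socket with bdev_get_iostat and pulls the command_transient_transport_error counter out of the bdev's driver-specific NVMe error stats, then requires it to be non-zero (here it read 252). A minimal standalone sketch of the same query, assuming the RPC socket path (/var/tmp/bperf.sock) and bdev name (nvme0n1) shown in the trace above:

  # Count the transient transport errors recorded for nvme0n1 by the running bdevperf app.
  # Socket path and bdev name are taken from the trace above; adjust for your setup.
  SOCK=/var/tmp/bperf.sock
  BDEV=nvme0n1
  errcount=$(./scripts/rpc.py -s "$SOCK" bdev_get_iostat -b "$BDEV" |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # Each injected data digest error should have surfaced as a transient transport error.
  (( errcount > 0 )) && echo "transient transport errors observed: $errcount"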
00:09:59 -- common/autotest_common.sh@955 -- # kill 541649 00:25:29.639 Received shutdown signal, test time was about 2.000000 seconds 00:25:29.639 00:25:29.639 Latency(us) 00:25:29.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.639 =================================================================================================================== 00:25:29.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:29.639 00:09:59 -- common/autotest_common.sh@960 -- # wait 541649 00:25:29.639 00:09:59 -- host/digest.sh@116 -- # killprocess 539248 00:25:29.639 00:09:59 -- common/autotest_common.sh@936 -- # '[' -z 539248 ']' 00:25:29.639 00:09:59 -- common/autotest_common.sh@940 -- # kill -0 539248 00:25:29.639 00:09:59 -- common/autotest_common.sh@941 -- # uname 00:25:29.639 00:09:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:29.639 00:09:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 539248 00:25:29.900 00:09:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:29.900 00:09:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:29.900 00:09:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 539248' 00:25:29.900 killing process with pid 539248 00:25:29.900 00:09:59 -- common/autotest_common.sh@955 -- # kill 539248 00:25:29.900 00:09:59 -- common/autotest_common.sh@960 -- # wait 539248 00:25:29.900 00:25:29.900 real 0m16.203s 00:25:29.900 user 0m31.662s 00:25:29.900 sys 0m3.355s 00:25:29.900 00:09:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:29.900 00:09:59 -- common/autotest_common.sh@10 -- # set +x 00:25:29.900 ************************************ 00:25:29.900 END TEST nvmf_digest_error 00:25:29.900 ************************************ 00:25:29.900 00:10:00 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:29.900 00:10:00 -- host/digest.sh@150 -- # nvmftestfini 00:25:29.900 00:10:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:29.900 00:10:00 -- nvmf/common.sh@117 -- # sync 00:25:29.900 00:10:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.900 00:10:00 -- nvmf/common.sh@120 -- # set +e 00:25:29.900 00:10:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.900 00:10:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.900 rmmod nvme_tcp 00:25:29.900 rmmod nvme_fabrics 00:25:29.900 rmmod nvme_keyring 00:25:29.900 00:10:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.900 00:10:00 -- nvmf/common.sh@124 -- # set -e 00:25:29.900 00:10:00 -- nvmf/common.sh@125 -- # return 0 00:25:29.900 00:10:00 -- nvmf/common.sh@478 -- # '[' -n 539248 ']' 00:25:29.900 00:10:00 -- nvmf/common.sh@479 -- # killprocess 539248 00:25:29.900 00:10:00 -- common/autotest_common.sh@936 -- # '[' -z 539248 ']' 00:25:29.900 00:10:00 -- common/autotest_common.sh@940 -- # kill -0 539248 00:25:29.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (539248) - No such process 00:25:29.900 00:10:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 539248 is not found' 00:25:29.900 Process with pid 539248 is not found 00:25:29.900 00:10:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:29.900 00:10:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:29.900 00:10:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:29.900 00:10:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.900 00:10:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 
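The nvmftestfini teardown that runs next is, in rough outline (a sketch assembled from the commands visible in the log, not the exact helpers in nvmf/common.sh): unload the initiator-side NVMe modules, stop the target app if it is still alive, then drop the test namespace.

# modprobe -r nvme-tcp also pulls out nvme_fabrics and nvme_keyring,
# matching the rmmod messages above.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Stop the target app if still running (here pid 539248 was already gone,
# hence the "No such process" message).
kill -0 539248 2>/dev/null && kill 539248
# Roughly what _remove_spdk_ns amounts to here (the delete command itself
# is assumed; the address flush appears verbatim in the log).
ip netns delete cvl_0_0_ns_spdk 2>/dev/null
ip -4 addr flush cvl_0_1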
00:25:29.900 00:10:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.900 00:10:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.900 00:10:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.446 00:10:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:32.446 00:25:32.446 real 0m41.550s 00:25:32.446 user 1m4.229s 00:25:32.446 sys 0m11.970s 00:25:32.446 00:10:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:32.446 00:10:02 -- common/autotest_common.sh@10 -- # set +x 00:25:32.446 ************************************ 00:25:32.446 END TEST nvmf_digest 00:25:32.446 ************************************ 00:25:32.446 00:10:02 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:25:32.446 00:10:02 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:25:32.446 00:10:02 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:25:32.446 00:10:02 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:32.446 00:10:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:32.446 00:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:32.446 00:10:02 -- common/autotest_common.sh@10 -- # set +x 00:25:32.446 ************************************ 00:25:32.446 START TEST nvmf_bdevperf 00:25:32.446 ************************************ 00:25:32.446 00:10:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:32.446 * Looking for test storage... 00:25:32.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.446 00:10:02 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.446 00:10:02 -- nvmf/common.sh@7 -- # uname -s 00:25:32.446 00:10:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.446 00:10:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.446 00:10:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.446 00:10:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.446 00:10:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.446 00:10:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.446 00:10:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.446 00:10:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.446 00:10:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.446 00:10:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.446 00:10:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:32.446 00:10:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:32.446 00:10:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.446 00:10:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.446 00:10:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.446 00:10:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.446 00:10:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.446 00:10:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.446 00:10:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.446 00:10:02 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.446 00:10:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.446 00:10:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.446 00:10:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.446 00:10:02 -- paths/export.sh@5 -- # export PATH 00:25:32.446 00:10:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.446 00:10:02 -- nvmf/common.sh@47 -- # : 0 00:25:32.446 00:10:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.446 00:10:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.446 00:10:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.446 00:10:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.446 00:10:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.446 00:10:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.446 00:10:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.446 00:10:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.446 00:10:02 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:32.446 00:10:02 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:32.446 00:10:02 -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:32.446 00:10:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:32.446 00:10:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.446 00:10:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:32.446 00:10:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:32.446 00:10:02 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:25:32.446 00:10:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.446 00:10:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.446 00:10:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.446 00:10:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:32.446 00:10:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:32.446 00:10:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.446 00:10:02 -- common/autotest_common.sh@10 -- # set +x 00:25:40.616 00:10:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:40.616 00:10:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.616 00:10:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.616 00:10:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.616 00:10:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.617 00:10:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.617 00:10:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.617 00:10:09 -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.617 00:10:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.617 00:10:09 -- nvmf/common.sh@296 -- # e810=() 00:25:40.617 00:10:09 -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.617 00:10:09 -- nvmf/common.sh@297 -- # x722=() 00:25:40.617 00:10:09 -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.617 00:10:09 -- nvmf/common.sh@298 -- # mlx=() 00:25:40.617 00:10:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.617 00:10:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.617 00:10:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.617 00:10:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.617 00:10:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.617 00:10:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.617 00:10:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:40.617 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:40.617 00:10:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.617 
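gather_supported_nvmf_pci_devs above matches the two E810 ports by PCI vendor/device ID (8086:159b) and, in the lines that follow, maps each matched function to its kernel net device through sysfs. The same lookup can be reproduced directly from sysfs; this is only an illustration of the idea, not the pci_bus_cache mechanism the script itself uses:

for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
    [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
    # Each matched PCI function exposes its bound netdev names under net/.
    for net in "$pci"/net/*; do
        [ -e "$net" ] && echo "Found net device ${net##*/} under ${pci##*/}"
    done
done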
00:10:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.617 00:10:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:40.617 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:40.617 00:10:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.617 00:10:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.617 00:10:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.617 00:10:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.617 00:10:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.617 00:10:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:40.617 Found net devices under 0000:31:00.0: cvl_0_0 00:25:40.617 00:10:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.617 00:10:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.617 00:10:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.617 00:10:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:40.617 00:10:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.617 00:10:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:40.617 Found net devices under 0000:31:00.1: cvl_0_1 00:25:40.617 00:10:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.617 00:10:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:40.617 00:10:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:40.617 00:10:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:40.617 00:10:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.617 00:10:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.617 00:10:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.617 00:10:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.617 00:10:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.617 00:10:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.617 00:10:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.617 00:10:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.617 00:10:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.617 00:10:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.617 00:10:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.617 00:10:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.617 00:10:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.617 00:10:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.617 00:10:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.617 00:10:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.617 
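nvmf_tcp_init above builds a back-to-back test topology out of the two E810 ports: the target-side port cvl_0_0 is moved into its own network namespace and addressed 10.0.0.2/24, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24. Condensed from the commands in the log (the lines that follow bring the namespaced side up and verify reachability with ping):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2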
00:10:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.617 00:10:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.617 00:10:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.617 00:10:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:25:40.617 00:25:40.617 --- 10.0.0.2 ping statistics --- 00:25:40.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.617 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:25:40.617 00:10:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:25:40.617 00:25:40.617 --- 10.0.0.1 ping statistics --- 00:25:40.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.617 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:25:40.617 00:10:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.617 00:10:09 -- nvmf/common.sh@411 -- # return 0 00:25:40.617 00:10:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:40.617 00:10:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.617 00:10:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:40.617 00:10:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.617 00:10:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:40.617 00:10:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:40.617 00:10:09 -- host/bdevperf.sh@25 -- # tgt_init 00:25:40.617 00:10:09 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:40.617 00:10:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:40.617 00:10:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:40.617 00:10:09 -- common/autotest_common.sh@10 -- # set +x 00:25:40.617 00:10:09 -- nvmf/common.sh@470 -- # nvmfpid=546737 00:25:40.617 00:10:09 -- nvmf/common.sh@471 -- # waitforlisten 546737 00:25:40.617 00:10:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:40.617 00:10:09 -- common/autotest_common.sh@817 -- # '[' -z 546737 ']' 00:25:40.617 00:10:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.617 00:10:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:40.617 00:10:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.617 00:10:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:40.617 00:10:09 -- common/autotest_common.sh@10 -- # set +x 00:25:40.617 [2024-04-27 00:10:09.878057] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
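nvmfappstart then launches nvmf_tgt inside that namespace and waits for its RPC socket before any configuration is issued. A minimal equivalent of the start-and-wait step (the polling loop is a sketch; the real waitforlisten helper in autotest_common.sh is more involved):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Wait for the target's default RPC socket to appear before configuring it.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done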
00:25:40.617 [2024-04-27 00:10:09.878130] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.617 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.617 [2024-04-27 00:10:09.949661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:40.617 [2024-04-27 00:10:10.023887] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.617 [2024-04-27 00:10:10.023927] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.617 [2024-04-27 00:10:10.023934] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.617 [2024-04-27 00:10:10.023941] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.617 [2024-04-27 00:10:10.023946] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.617 [2024-04-27 00:10:10.024212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.617 [2024-04-27 00:10:10.024431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.617 [2024-04-27 00:10:10.024431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.617 00:10:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:40.617 00:10:10 -- common/autotest_common.sh@850 -- # return 0 00:25:40.617 00:10:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:40.618 00:10:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:40.618 00:10:10 -- common/autotest_common.sh@10 -- # set +x 00:25:40.618 00:10:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.618 00:10:10 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.618 00:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.618 00:10:10 -- common/autotest_common.sh@10 -- # set +x 00:25:40.618 [2024-04-27 00:10:10.697201] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.618 00:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.618 00:10:10 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:40.618 00:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.618 00:10:10 -- common/autotest_common.sh@10 -- # set +x 00:25:40.618 Malloc0 00:25:40.618 00:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.618 00:10:10 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:40.618 00:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.618 00:10:10 -- common/autotest_common.sh@10 -- # set +x 00:25:40.618 00:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.618 00:10:10 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:40.618 00:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.618 00:10:10 -- common/autotest_common.sh@10 -- # set +x 00:25:40.618 00:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.618 00:10:10 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.618 00:10:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.618 
00:10:10 -- common/autotest_common.sh@10 -- # set +x 00:25:40.618 [2024-04-27 00:10:10.766172] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.618 00:10:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.618 00:10:10 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:40.618 00:10:10 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:40.618 00:10:10 -- nvmf/common.sh@521 -- # config=() 00:25:40.618 00:10:10 -- nvmf/common.sh@521 -- # local subsystem config 00:25:40.618 00:10:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:40.618 00:10:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:40.618 { 00:25:40.618 "params": { 00:25:40.618 "name": "Nvme$subsystem", 00:25:40.618 "trtype": "$TEST_TRANSPORT", 00:25:40.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:40.618 "adrfam": "ipv4", 00:25:40.618 "trsvcid": "$NVMF_PORT", 00:25:40.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:40.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:40.618 "hdgst": ${hdgst:-false}, 00:25:40.618 "ddgst": ${ddgst:-false} 00:25:40.618 }, 00:25:40.618 "method": "bdev_nvme_attach_controller" 00:25:40.618 } 00:25:40.618 EOF 00:25:40.618 )") 00:25:40.618 00:10:10 -- nvmf/common.sh@543 -- # cat 00:25:40.618 00:10:10 -- nvmf/common.sh@545 -- # jq . 00:25:40.618 00:10:10 -- nvmf/common.sh@546 -- # IFS=, 00:25:40.618 00:10:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:40.618 "params": { 00:25:40.618 "name": "Nvme1", 00:25:40.618 "trtype": "tcp", 00:25:40.618 "traddr": "10.0.0.2", 00:25:40.618 "adrfam": "ipv4", 00:25:40.618 "trsvcid": "4420", 00:25:40.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:40.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:40.618 "hdgst": false, 00:25:40.618 "ddgst": false 00:25:40.618 }, 00:25:40.618 "method": "bdev_nvme_attach_controller" 00:25:40.618 }' 00:25:40.618 [2024-04-27 00:10:10.819699] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:25:40.618 [2024-04-27 00:10:10.819747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546769 ] 00:25:40.880 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.880 [2024-04-27 00:10:10.879931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.880 [2024-04-27 00:10:10.944080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.141 Running I/O for 1 seconds... 
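The tgt_init sequence above provisions the target entirely over RPC: create the TCP transport, back it with a malloc bdev, expose the bdev through a subsystem, and open a TCP listener on the namespaced address. rpc_cmd in the log wraps exactly these calls; spelled out against rpc.py they would look roughly like:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420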
00:25:42.084 00:25:42.084 Latency(us) 00:25:42.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.084 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:42.084 Verification LBA range: start 0x0 length 0x4000 00:25:42.084 Nvme1n1 : 1.01 8988.02 35.11 0.00 0.00 14178.36 3181.23 14636.37 00:25:42.084 =================================================================================================================== 00:25:42.084 Total : 8988.02 35.11 0.00 0.00 14178.36 3181.23 14636.37 00:25:42.344 00:10:12 -- host/bdevperf.sh@30 -- # bdevperfpid=547112 00:25:42.344 00:10:12 -- host/bdevperf.sh@32 -- # sleep 3 00:25:42.344 00:10:12 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:42.344 00:10:12 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:42.344 00:10:12 -- nvmf/common.sh@521 -- # config=() 00:25:42.344 00:10:12 -- nvmf/common.sh@521 -- # local subsystem config 00:25:42.344 00:10:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:42.344 00:10:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:42.344 { 00:25:42.344 "params": { 00:25:42.344 "name": "Nvme$subsystem", 00:25:42.344 "trtype": "$TEST_TRANSPORT", 00:25:42.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.344 "adrfam": "ipv4", 00:25:42.344 "trsvcid": "$NVMF_PORT", 00:25:42.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.344 "hdgst": ${hdgst:-false}, 00:25:42.344 "ddgst": ${ddgst:-false} 00:25:42.344 }, 00:25:42.344 "method": "bdev_nvme_attach_controller" 00:25:42.344 } 00:25:42.344 EOF 00:25:42.344 )") 00:25:42.344 00:10:12 -- nvmf/common.sh@543 -- # cat 00:25:42.344 00:10:12 -- nvmf/common.sh@545 -- # jq . 00:25:42.344 00:10:12 -- nvmf/common.sh@546 -- # IFS=, 00:25:42.344 00:10:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:42.344 "params": { 00:25:42.344 "name": "Nvme1", 00:25:42.344 "trtype": "tcp", 00:25:42.344 "traddr": "10.0.0.2", 00:25:42.344 "adrfam": "ipv4", 00:25:42.344 "trsvcid": "4420", 00:25:42.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:42.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:42.344 "hdgst": false, 00:25:42.344 "ddgst": false 00:25:42.344 }, 00:25:42.344 "method": "bdev_nvme_attach_controller" 00:25:42.344 }' 00:25:42.345 [2024-04-27 00:10:12.401219] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:25:42.345 [2024-04-27 00:10:12.401272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547112 ] 00:25:42.345 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.345 [2024-04-27 00:10:12.460684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.345 [2024-04-27 00:10:12.523848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.604 Running I/O for 15 seconds... 
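The bdevperf jobs are pointed at that listener through a JSON config generated on the fly by gen_nvmf_target_json and fed in over a file descriptor. Expanded into a file, the configuration for the 15-second run amounts to the following (the bdev_nvme_attach_controller parameters are copied from the printf in the log; the surrounding "subsystems" wrapper follows the usual SPDK JSON-config layout, and a temp-file path is used here in place of /dev/fd/63):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f

While this run is in flight the test kills the target (kill -9 546737 below), which is why the subsequent I/O completes with ABORTED - SQ DELETION.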
00:25:45.151 00:10:15 -- host/bdevperf.sh@33 -- # kill -9 546737 00:25:45.151 00:10:15 -- host/bdevperf.sh@35 -- # sleep 3 00:25:45.151 [2024-04-27 00:10:15.370360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370605] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.151 [2024-04-27 00:10:15.370846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.151 [2024-04-27 00:10:15.370857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.415 [2024-04-27 00:10:15.370866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.415 [2024-04-27 00:10:15.370877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.415 [2024-04-27 00:10:15.370885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.415 [2024-04-27 00:10:15.370895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.415 [2024-04-27 00:10:15.370903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.415 [2024-04-27 00:10:15.370914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.415 [2024-04-27 00:10:15.370926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.415 [2024-04-27 00:10:15.370935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.415 [2024-04-27 00:10:15.370943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.415 [2024-04-27 00:10:15.370952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.415 [2024-04-27 00:10:15.370959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.415 [2024-04-27 00:10:15.370968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.415 [2024-04-27 00:10:15.370976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.370985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.370993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.371010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:45.416 [2024-04-27 00:10:15.371021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.371028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.371044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.371061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.371079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.371096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.371113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.371130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.416 [2024-04-27 00:10:15.371148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371192] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371354] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60104 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.416 [2024-04-27 00:10:15.371563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.416 [2024-04-27 00:10:15.371570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 
[2024-04-27 00:10:15.371685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.371991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.371998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.372014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.372031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.372047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.372064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.372080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.372096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.372111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.372129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.417 [2024-04-27 00:10:15.372146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.417 [2024-04-27 00:10:15.372155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:45.418 [2024-04-27 00:10:15.372187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372349] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.418 [2024-04-27 00:10:15.372501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372510] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.418 [2024-04-27 00:10:15.372517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.418 [2024-04-27 00:10:15.372533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.418 [2024-04-27 00:10:15.372550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.418 [2024-04-27 00:10:15.372565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.418 [2024-04-27 00:10:15.372582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.418 [2024-04-27 00:10:15.372598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb956b0 is same with the state(5) to be set 00:25:45.418 [2024-04-27 00:10:15.372615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:45.418 [2024-04-27 00:10:15.372620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:45.418 [2024-04-27 00:10:15.372626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59920 len:8 PRP1 0x0 PRP2 0x0 00:25:45.418 [2024-04-27 00:10:15.372634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.418 [2024-04-27 00:10:15.372673] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb956b0 was disconnected and freed. reset controller. 
00:25:45.418 [2024-04-27 00:10:15.376208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.418 [2024-04-27 00:10:15.376255] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.418 [2024-04-27 00:10:15.377065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.418 [2024-04-27 00:10:15.377408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.418 [2024-04-27 00:10:15.377422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.418 [2024-04-27 00:10:15.377432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.418 [2024-04-27 00:10:15.377669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.418 [2024-04-27 00:10:15.377895] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.418 [2024-04-27 00:10:15.377905] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.418 [2024-04-27 00:10:15.377913] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.418 [2024-04-27 00:10:15.381391] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.418 [2024-04-27 00:10:15.390206] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.418 [2024-04-27 00:10:15.390740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.391098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.391113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.419 [2024-04-27 00:10:15.391122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.419 [2024-04-27 00:10:15.391358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.419 [2024-04-27 00:10:15.391575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.419 [2024-04-27 00:10:15.391584] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.419 [2024-04-27 00:10:15.391592] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.419 [2024-04-27 00:10:15.395073] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.419 [2024-04-27 00:10:15.404089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.419 [2024-04-27 00:10:15.404647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.404964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.404974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.419 [2024-04-27 00:10:15.404982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.419 [2024-04-27 00:10:15.405198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.419 [2024-04-27 00:10:15.405412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.419 [2024-04-27 00:10:15.405421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.419 [2024-04-27 00:10:15.405428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.419 [2024-04-27 00:10:15.408917] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.419 [2024-04-27 00:10:15.417929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.419 [2024-04-27 00:10:15.418562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.418911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.418933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.419 [2024-04-27 00:10:15.418943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.419 [2024-04-27 00:10:15.419177] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.419 [2024-04-27 00:10:15.419395] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.419 [2024-04-27 00:10:15.419406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.419 [2024-04-27 00:10:15.419413] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.419 [2024-04-27 00:10:15.422895] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.419 [2024-04-27 00:10:15.431703] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.419 [2024-04-27 00:10:15.432285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.432638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.432651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.419 [2024-04-27 00:10:15.432660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.419 [2024-04-27 00:10:15.432901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.419 [2024-04-27 00:10:15.433119] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.419 [2024-04-27 00:10:15.433127] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.419 [2024-04-27 00:10:15.433135] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.419 [2024-04-27 00:10:15.436618] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.419 [2024-04-27 00:10:15.445431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.419 [2024-04-27 00:10:15.446117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.446468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.446482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.419 [2024-04-27 00:10:15.446492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.419 [2024-04-27 00:10:15.446726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.419 [2024-04-27 00:10:15.446954] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.419 [2024-04-27 00:10:15.446962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.419 [2024-04-27 00:10:15.446970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.419 [2024-04-27 00:10:15.450447] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.419 [2024-04-27 00:10:15.459254] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.419 [2024-04-27 00:10:15.459897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.460318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.460331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.419 [2024-04-27 00:10:15.460344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.419 [2024-04-27 00:10:15.460578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.419 [2024-04-27 00:10:15.460796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.419 [2024-04-27 00:10:15.460804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.419 [2024-04-27 00:10:15.460811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.419 [2024-04-27 00:10:15.464297] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.419 [2024-04-27 00:10:15.473104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.419 [2024-04-27 00:10:15.473765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.474077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.474092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.419 [2024-04-27 00:10:15.474102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.419 [2024-04-27 00:10:15.474336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.419 [2024-04-27 00:10:15.474554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.419 [2024-04-27 00:10:15.474561] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.419 [2024-04-27 00:10:15.474569] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.419 [2024-04-27 00:10:15.478050] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.419 [2024-04-27 00:10:15.486859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.419 [2024-04-27 00:10:15.487520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.487886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.487900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.419 [2024-04-27 00:10:15.487910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.419 [2024-04-27 00:10:15.488143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.419 [2024-04-27 00:10:15.488361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.419 [2024-04-27 00:10:15.488369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.419 [2024-04-27 00:10:15.488376] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.419 [2024-04-27 00:10:15.491858] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.419 [2024-04-27 00:10:15.500665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.419 [2024-04-27 00:10:15.501340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.501690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.419 [2024-04-27 00:10:15.501703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.419 [2024-04-27 00:10:15.501712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.420 [2024-04-27 00:10:15.501958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.420 [2024-04-27 00:10:15.502177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.420 [2024-04-27 00:10:15.502186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.420 [2024-04-27 00:10:15.502193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.420 [2024-04-27 00:10:15.505677] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.420 [2024-04-27 00:10:15.514504] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.420 [2024-04-27 00:10:15.514957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.515405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.515418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.420 [2024-04-27 00:10:15.515427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.420 [2024-04-27 00:10:15.515661] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.420 [2024-04-27 00:10:15.515887] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.420 [2024-04-27 00:10:15.515896] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.420 [2024-04-27 00:10:15.515904] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.420 [2024-04-27 00:10:15.519381] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.420 [2024-04-27 00:10:15.528392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.420 [2024-04-27 00:10:15.529125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.529478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.529491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.420 [2024-04-27 00:10:15.529500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.420 [2024-04-27 00:10:15.529734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.420 [2024-04-27 00:10:15.529959] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.420 [2024-04-27 00:10:15.529968] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.420 [2024-04-27 00:10:15.529976] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.420 [2024-04-27 00:10:15.533461] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.420 [2024-04-27 00:10:15.542279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.420 [2024-04-27 00:10:15.542940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.543291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.543303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.420 [2024-04-27 00:10:15.543313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.420 [2024-04-27 00:10:15.543547] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.420 [2024-04-27 00:10:15.543764] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.420 [2024-04-27 00:10:15.543777] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.420 [2024-04-27 00:10:15.543785] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.420 [2024-04-27 00:10:15.547271] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.420 [2024-04-27 00:10:15.556083] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.420 [2024-04-27 00:10:15.556729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.557117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.557131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.420 [2024-04-27 00:10:15.557140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.420 [2024-04-27 00:10:15.557374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.420 [2024-04-27 00:10:15.557592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.420 [2024-04-27 00:10:15.557600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.420 [2024-04-27 00:10:15.557607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.420 [2024-04-27 00:10:15.561088] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.420 [2024-04-27 00:10:15.569896] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.420 [2024-04-27 00:10:15.570554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.570818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.570831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.420 [2024-04-27 00:10:15.570849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.420 [2024-04-27 00:10:15.571083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.420 [2024-04-27 00:10:15.571302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.420 [2024-04-27 00:10:15.571310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.420 [2024-04-27 00:10:15.571317] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.420 [2024-04-27 00:10:15.574793] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.420 [2024-04-27 00:10:15.583806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.420 [2024-04-27 00:10:15.584455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.584810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.584822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.420 [2024-04-27 00:10:15.584831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.420 [2024-04-27 00:10:15.585073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.420 [2024-04-27 00:10:15.585292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.420 [2024-04-27 00:10:15.585300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.420 [2024-04-27 00:10:15.585311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.420 [2024-04-27 00:10:15.588789] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.420 [2024-04-27 00:10:15.597598] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.420 [2024-04-27 00:10:15.598267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.598614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.420 [2024-04-27 00:10:15.598627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.420 [2024-04-27 00:10:15.598636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.421 [2024-04-27 00:10:15.598880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.421 [2024-04-27 00:10:15.599098] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.421 [2024-04-27 00:10:15.599107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.421 [2024-04-27 00:10:15.599114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.421 [2024-04-27 00:10:15.602596] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.421 [2024-04-27 00:10:15.611424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.421 [2024-04-27 00:10:15.612114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.421 [2024-04-27 00:10:15.612462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.421 [2024-04-27 00:10:15.612475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.421 [2024-04-27 00:10:15.612484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.421 [2024-04-27 00:10:15.612718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.421 [2024-04-27 00:10:15.612941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.421 [2024-04-27 00:10:15.612950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.421 [2024-04-27 00:10:15.612957] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.421 [2024-04-27 00:10:15.616433] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.421 [2024-04-27 00:10:15.625246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.421 [2024-04-27 00:10:15.625812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.421 [2024-04-27 00:10:15.626185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.421 [2024-04-27 00:10:15.626195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.421 [2024-04-27 00:10:15.626203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.421 [2024-04-27 00:10:15.626418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.421 [2024-04-27 00:10:15.626632] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.421 [2024-04-27 00:10:15.626640] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.421 [2024-04-27 00:10:15.626647] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.421 [2024-04-27 00:10:15.630133] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.682 [2024-04-27 00:10:15.639171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.682 [2024-04-27 00:10:15.639730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.682 [2024-04-27 00:10:15.640083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.640094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.683 [2024-04-27 00:10:15.640101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.683 [2024-04-27 00:10:15.640316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.683 [2024-04-27 00:10:15.640530] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.683 [2024-04-27 00:10:15.640538] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.683 [2024-04-27 00:10:15.640544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.683 [2024-04-27 00:10:15.644026] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.683 [2024-04-27 00:10:15.653041] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.683 [2024-04-27 00:10:15.653587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.653818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.653828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.683 [2024-04-27 00:10:15.653840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.683 [2024-04-27 00:10:15.654056] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.683 [2024-04-27 00:10:15.654270] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.683 [2024-04-27 00:10:15.654277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.683 [2024-04-27 00:10:15.654284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.683 [2024-04-27 00:10:15.657754] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.683 [2024-04-27 00:10:15.666764] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.683 [2024-04-27 00:10:15.667396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.667746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.667759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.683 [2024-04-27 00:10:15.667768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.683 [2024-04-27 00:10:15.668010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.683 [2024-04-27 00:10:15.668229] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.683 [2024-04-27 00:10:15.668238] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.683 [2024-04-27 00:10:15.668245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.683 [2024-04-27 00:10:15.671721] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.683 [2024-04-27 00:10:15.680533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.683 [2024-04-27 00:10:15.681203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.681562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.681574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.683 [2024-04-27 00:10:15.681583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.683 [2024-04-27 00:10:15.681816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.683 [2024-04-27 00:10:15.682043] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.683 [2024-04-27 00:10:15.682052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.683 [2024-04-27 00:10:15.682060] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.683 [2024-04-27 00:10:15.685537] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.683 [2024-04-27 00:10:15.694378] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.683 [2024-04-27 00:10:15.695034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.695386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.695399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.683 [2024-04-27 00:10:15.695408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.683 [2024-04-27 00:10:15.695641] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.683 [2024-04-27 00:10:15.695868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.683 [2024-04-27 00:10:15.695878] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.683 [2024-04-27 00:10:15.695885] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.683 [2024-04-27 00:10:15.699362] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.683 [2024-04-27 00:10:15.708180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.683 [2024-04-27 00:10:15.708849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.709206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.709219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.683 [2024-04-27 00:10:15.709228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.683 [2024-04-27 00:10:15.709462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.683 [2024-04-27 00:10:15.709680] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.683 [2024-04-27 00:10:15.709688] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.683 [2024-04-27 00:10:15.709696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.683 [2024-04-27 00:10:15.713187] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.683 [2024-04-27 00:10:15.722004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.683 [2024-04-27 00:10:15.722698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.722921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.722935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.683 [2024-04-27 00:10:15.722944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.683 [2024-04-27 00:10:15.723178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.683 [2024-04-27 00:10:15.723397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.683 [2024-04-27 00:10:15.723405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.683 [2024-04-27 00:10:15.723412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.683 [2024-04-27 00:10:15.726894] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.683 [2024-04-27 00:10:15.735916] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.683 [2024-04-27 00:10:15.736587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.736939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.736953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.683 [2024-04-27 00:10:15.736962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.683 [2024-04-27 00:10:15.737196] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.683 [2024-04-27 00:10:15.737414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.683 [2024-04-27 00:10:15.737428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.683 [2024-04-27 00:10:15.737435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.683 [2024-04-27 00:10:15.740917] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.683 [2024-04-27 00:10:15.749722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.683 [2024-04-27 00:10:15.750348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.750698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.683 [2024-04-27 00:10:15.750710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.684 [2024-04-27 00:10:15.750719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.684 [2024-04-27 00:10:15.750962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.684 [2024-04-27 00:10:15.751180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.684 [2024-04-27 00:10:15.751189] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.684 [2024-04-27 00:10:15.751196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.684 [2024-04-27 00:10:15.754672] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.684 [2024-04-27 00:10:15.763478] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.684 [2024-04-27 00:10:15.764140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.764365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.764383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.684 [2024-04-27 00:10:15.764393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.684 [2024-04-27 00:10:15.764626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.684 [2024-04-27 00:10:15.764856] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.684 [2024-04-27 00:10:15.764865] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.684 [2024-04-27 00:10:15.764873] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.684 [2024-04-27 00:10:15.768353] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.684 [2024-04-27 00:10:15.777368] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.684 [2024-04-27 00:10:15.778037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.778330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.778343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.684 [2024-04-27 00:10:15.778353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.684 [2024-04-27 00:10:15.778587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.684 [2024-04-27 00:10:15.778805] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.684 [2024-04-27 00:10:15.778813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.684 [2024-04-27 00:10:15.778820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.684 [2024-04-27 00:10:15.782308] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.684 [2024-04-27 00:10:15.791113] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.684 [2024-04-27 00:10:15.791667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.792020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.792033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.684 [2024-04-27 00:10:15.792043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.684 [2024-04-27 00:10:15.792277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.684 [2024-04-27 00:10:15.792494] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.684 [2024-04-27 00:10:15.792503] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.684 [2024-04-27 00:10:15.792510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.684 [2024-04-27 00:10:15.795991] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.684 [2024-04-27 00:10:15.805004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.684 [2024-04-27 00:10:15.805688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.805916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.805931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.684 [2024-04-27 00:10:15.805945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.684 [2024-04-27 00:10:15.806179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.684 [2024-04-27 00:10:15.806397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.684 [2024-04-27 00:10:15.806405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.684 [2024-04-27 00:10:15.806412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.684 [2024-04-27 00:10:15.809905] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.684 [2024-04-27 00:10:15.818784] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.684 [2024-04-27 00:10:15.819447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.819799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.819812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.684 [2024-04-27 00:10:15.819822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.684 [2024-04-27 00:10:15.820065] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.684 [2024-04-27 00:10:15.820283] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.684 [2024-04-27 00:10:15.820293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.684 [2024-04-27 00:10:15.820300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.684 [2024-04-27 00:10:15.823779] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.684 [2024-04-27 00:10:15.832599] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.684 [2024-04-27 00:10:15.833223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.833571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.833583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.684 [2024-04-27 00:10:15.833593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.684 [2024-04-27 00:10:15.833826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.684 [2024-04-27 00:10:15.834052] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.684 [2024-04-27 00:10:15.834061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.684 [2024-04-27 00:10:15.834069] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.684 [2024-04-27 00:10:15.837551] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.684 [2024-04-27 00:10:15.846358] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.684 [2024-04-27 00:10:15.847043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.847391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.847404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.684 [2024-04-27 00:10:15.847413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.684 [2024-04-27 00:10:15.847651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.684 [2024-04-27 00:10:15.847878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.684 [2024-04-27 00:10:15.847888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.684 [2024-04-27 00:10:15.847896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.684 [2024-04-27 00:10:15.851372] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.684 [2024-04-27 00:10:15.860184] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.684 [2024-04-27 00:10:15.860883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.861294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.684 [2024-04-27 00:10:15.861307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.684 [2024-04-27 00:10:15.861316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.684 [2024-04-27 00:10:15.861549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.684 [2024-04-27 00:10:15.861767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.684 [2024-04-27 00:10:15.861775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.684 [2024-04-27 00:10:15.861783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.684 [2024-04-27 00:10:15.865268] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.684 [2024-04-27 00:10:15.874078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.684 [2024-04-27 00:10:15.874770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.685 [2024-04-27 00:10:15.875123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.685 [2024-04-27 00:10:15.875137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.685 [2024-04-27 00:10:15.875146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.685 [2024-04-27 00:10:15.875380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.685 [2024-04-27 00:10:15.875598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.685 [2024-04-27 00:10:15.875606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.685 [2024-04-27 00:10:15.875614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.685 [2024-04-27 00:10:15.879193] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.685 [2024-04-27 00:10:15.887814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.685 [2024-04-27 00:10:15.888373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.685 [2024-04-27 00:10:15.888700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.685 [2024-04-27 00:10:15.888710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.685 [2024-04-27 00:10:15.888717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.685 [2024-04-27 00:10:15.888937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.685 [2024-04-27 00:10:15.889157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.685 [2024-04-27 00:10:15.889164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.685 [2024-04-27 00:10:15.889171] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.685 [2024-04-27 00:10:15.892644] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.946 [2024-04-27 00:10:15.901660] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.946 [2024-04-27 00:10:15.902307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.902655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.902667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.946 [2024-04-27 00:10:15.902677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.946 [2024-04-27 00:10:15.902921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.946 [2024-04-27 00:10:15.903139] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.946 [2024-04-27 00:10:15.903148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.946 [2024-04-27 00:10:15.903156] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.946 [2024-04-27 00:10:15.906634] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.946 [2024-04-27 00:10:15.915458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.946 [2024-04-27 00:10:15.916120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.916471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.916483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.946 [2024-04-27 00:10:15.916492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.946 [2024-04-27 00:10:15.916726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.946 [2024-04-27 00:10:15.916953] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.946 [2024-04-27 00:10:15.916963] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.946 [2024-04-27 00:10:15.916970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.946 [2024-04-27 00:10:15.920454] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.946 [2024-04-27 00:10:15.929269] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.946 [2024-04-27 00:10:15.929959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.930349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.930362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.946 [2024-04-27 00:10:15.930371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.946 [2024-04-27 00:10:15.930605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.946 [2024-04-27 00:10:15.930823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.946 [2024-04-27 00:10:15.930845] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.946 [2024-04-27 00:10:15.930853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.946 [2024-04-27 00:10:15.934336] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.946 [2024-04-27 00:10:15.943153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.946 [2024-04-27 00:10:15.943753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.944098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.944108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.946 [2024-04-27 00:10:15.944116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.946 [2024-04-27 00:10:15.944331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.946 [2024-04-27 00:10:15.944545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.946 [2024-04-27 00:10:15.944554] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.946 [2024-04-27 00:10:15.944561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.946 [2024-04-27 00:10:15.948034] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.946 [2024-04-27 00:10:15.957043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.946 [2024-04-27 00:10:15.957668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.958025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.946 [2024-04-27 00:10:15.958039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.946 [2024-04-27 00:10:15.958049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.946 [2024-04-27 00:10:15.958282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.946 [2024-04-27 00:10:15.958500] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.946 [2024-04-27 00:10:15.958509] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.946 [2024-04-27 00:10:15.958516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.946 [2024-04-27 00:10:15.961996] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.946 [2024-04-27 00:10:15.970801] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.946 [2024-04-27 00:10:15.971494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:15.971852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:15.971866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.947 [2024-04-27 00:10:15.971875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.947 [2024-04-27 00:10:15.972109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.947 [2024-04-27 00:10:15.972327] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.947 [2024-04-27 00:10:15.972336] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.947 [2024-04-27 00:10:15.972348] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.947 [2024-04-27 00:10:15.975827] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.947 [2024-04-27 00:10:15.984636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.947 [2024-04-27 00:10:15.985305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:15.985656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:15.985668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.947 [2024-04-27 00:10:15.985678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.947 [2024-04-27 00:10:15.985920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.947 [2024-04-27 00:10:15.986139] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.947 [2024-04-27 00:10:15.986148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.947 [2024-04-27 00:10:15.986156] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.947 [2024-04-27 00:10:15.989634] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.947 [2024-04-27 00:10:15.998442] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.947 [2024-04-27 00:10:15.999115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:15.999459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:15.999473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.947 [2024-04-27 00:10:15.999482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.947 [2024-04-27 00:10:15.999715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.947 [2024-04-27 00:10:15.999941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.947 [2024-04-27 00:10:15.999950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.947 [2024-04-27 00:10:15.999957] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.947 [2024-04-27 00:10:16.003434] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.947 [2024-04-27 00:10:16.012248] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.947 [2024-04-27 00:10:16.012944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.013296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.013308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.947 [2024-04-27 00:10:16.013317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.947 [2024-04-27 00:10:16.013551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.947 [2024-04-27 00:10:16.013769] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.947 [2024-04-27 00:10:16.013778] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.947 [2024-04-27 00:10:16.013785] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.947 [2024-04-27 00:10:16.017275] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.947 [2024-04-27 00:10:16.026087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.947 [2024-04-27 00:10:16.026691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.027019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.027030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.947 [2024-04-27 00:10:16.027037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.947 [2024-04-27 00:10:16.027253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.947 [2024-04-27 00:10:16.027467] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.947 [2024-04-27 00:10:16.027475] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.947 [2024-04-27 00:10:16.027482] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.947 [2024-04-27 00:10:16.030959] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.947 [2024-04-27 00:10:16.039992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.947 [2024-04-27 00:10:16.040584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.040906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.040916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.947 [2024-04-27 00:10:16.040924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.947 [2024-04-27 00:10:16.041139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.947 [2024-04-27 00:10:16.041353] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.947 [2024-04-27 00:10:16.041361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.947 [2024-04-27 00:10:16.041368] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.947 [2024-04-27 00:10:16.044840] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.947 [2024-04-27 00:10:16.053849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.947 [2024-04-27 00:10:16.054503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.054860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.054875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.947 [2024-04-27 00:10:16.054884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.947 [2024-04-27 00:10:16.055118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.947 [2024-04-27 00:10:16.055336] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.947 [2024-04-27 00:10:16.055345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.947 [2024-04-27 00:10:16.055353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.947 [2024-04-27 00:10:16.058842] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.947 [2024-04-27 00:10:16.067661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.947 [2024-04-27 00:10:16.068263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.068477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.068487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.947 [2024-04-27 00:10:16.068495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.947 [2024-04-27 00:10:16.068710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.947 [2024-04-27 00:10:16.068931] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.947 [2024-04-27 00:10:16.068939] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.947 [2024-04-27 00:10:16.068946] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.947 [2024-04-27 00:10:16.072434] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.947 [2024-04-27 00:10:16.081460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.947 [2024-04-27 00:10:16.082112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.082463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.947 [2024-04-27 00:10:16.082476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.947 [2024-04-27 00:10:16.082485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.947 [2024-04-27 00:10:16.082719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.947 [2024-04-27 00:10:16.082942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.947 [2024-04-27 00:10:16.082951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.947 [2024-04-27 00:10:16.082959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.948 [2024-04-27 00:10:16.086441] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.948 [2024-04-27 00:10:16.095261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.948 [2024-04-27 00:10:16.095912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.096260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.096273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.948 [2024-04-27 00:10:16.096283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.948 [2024-04-27 00:10:16.096517] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.948 [2024-04-27 00:10:16.096735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.948 [2024-04-27 00:10:16.096743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.948 [2024-04-27 00:10:16.096751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.948 [2024-04-27 00:10:16.100236] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.948 [2024-04-27 00:10:16.109066] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.948 [2024-04-27 00:10:16.109552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.109915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.109927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.948 [2024-04-27 00:10:16.109934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.948 [2024-04-27 00:10:16.110150] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.948 [2024-04-27 00:10:16.110364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.948 [2024-04-27 00:10:16.110372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.948 [2024-04-27 00:10:16.110379] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.948 [2024-04-27 00:10:16.113856] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.948 [2024-04-27 00:10:16.122868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.948 [2024-04-27 00:10:16.123531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.123791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.123812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.948 [2024-04-27 00:10:16.123821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.948 [2024-04-27 00:10:16.124062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.948 [2024-04-27 00:10:16.124282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.948 [2024-04-27 00:10:16.124290] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.948 [2024-04-27 00:10:16.124297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.948 [2024-04-27 00:10:16.127774] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.948 [2024-04-27 00:10:16.136603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.948 [2024-04-27 00:10:16.137277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.137661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.137674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.948 [2024-04-27 00:10:16.137683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.948 [2024-04-27 00:10:16.137924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.948 [2024-04-27 00:10:16.138143] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.948 [2024-04-27 00:10:16.138151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.948 [2024-04-27 00:10:16.138158] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.948 [2024-04-27 00:10:16.141637] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.948 [2024-04-27 00:10:16.150452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.948 [2024-04-27 00:10:16.151052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.151406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.948 [2024-04-27 00:10:16.151423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:45.948 [2024-04-27 00:10:16.151433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:45.948 [2024-04-27 00:10:16.151667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:45.948 [2024-04-27 00:10:16.151892] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.948 [2024-04-27 00:10:16.151902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.948 [2024-04-27 00:10:16.151910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.948 [2024-04-27 00:10:16.155386] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.948 [2024-04-27 00:10:16.164211] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.210 [2024-04-27 00:10:16.164765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.210 [2024-04-27 00:10:16.165121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.210 [2024-04-27 00:10:16.165133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.210 [2024-04-27 00:10:16.165141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.210 [2024-04-27 00:10:16.165356] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.210 [2024-04-27 00:10:16.165570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.210 [2024-04-27 00:10:16.165578] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.210 [2024-04-27 00:10:16.165586] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.210 [2024-04-27 00:10:16.169068] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.210 [2024-04-27 00:10:16.178097] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.210 [2024-04-27 00:10:16.178745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.210 [2024-04-27 00:10:16.179096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.210 [2024-04-27 00:10:16.179109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.210 [2024-04-27 00:10:16.179119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.210 [2024-04-27 00:10:16.179352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.210 [2024-04-27 00:10:16.179570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.210 [2024-04-27 00:10:16.179579] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.210 [2024-04-27 00:10:16.179586] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.210 [2024-04-27 00:10:16.183075] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.210 [2024-04-27 00:10:16.191902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.210 [2024-04-27 00:10:16.192463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.210 [2024-04-27 00:10:16.192803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.210 [2024-04-27 00:10:16.192813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.210 [2024-04-27 00:10:16.192824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.210 [2024-04-27 00:10:16.193047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.210 [2024-04-27 00:10:16.193262] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.211 [2024-04-27 00:10:16.193270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.211 [2024-04-27 00:10:16.193277] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.211 [2024-04-27 00:10:16.196752] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.211 [2024-04-27 00:10:16.205792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.211 [2024-04-27 00:10:16.206453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.206802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.206815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.211 [2024-04-27 00:10:16.206824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.211 [2024-04-27 00:10:16.207075] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.211 [2024-04-27 00:10:16.207294] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.211 [2024-04-27 00:10:16.207302] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.211 [2024-04-27 00:10:16.207310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.211 [2024-04-27 00:10:16.210793] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.211 [2024-04-27 00:10:16.219615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.211 [2024-04-27 00:10:16.220160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.220481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.220490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.211 [2024-04-27 00:10:16.220498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.211 [2024-04-27 00:10:16.220713] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.211 [2024-04-27 00:10:16.220934] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.211 [2024-04-27 00:10:16.220943] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.211 [2024-04-27 00:10:16.220950] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.211 [2024-04-27 00:10:16.224430] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.211 [2024-04-27 00:10:16.233464] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.211 [2024-04-27 00:10:16.234137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.234488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.234501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.211 [2024-04-27 00:10:16.234510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.211 [2024-04-27 00:10:16.234748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.211 [2024-04-27 00:10:16.234974] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.211 [2024-04-27 00:10:16.234983] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.211 [2024-04-27 00:10:16.234990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.211 [2024-04-27 00:10:16.238480] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.211 [2024-04-27 00:10:16.247305] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.211 [2024-04-27 00:10:16.248016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.248239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.248253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.211 [2024-04-27 00:10:16.248263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.211 [2024-04-27 00:10:16.248498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.211 [2024-04-27 00:10:16.248715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.211 [2024-04-27 00:10:16.248723] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.211 [2024-04-27 00:10:16.248731] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.211 [2024-04-27 00:10:16.252218] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.211 [2024-04-27 00:10:16.261046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.211 [2024-04-27 00:10:16.261641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.261960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.261971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.211 [2024-04-27 00:10:16.261979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.211 [2024-04-27 00:10:16.262195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.211 [2024-04-27 00:10:16.262409] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.211 [2024-04-27 00:10:16.262417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.211 [2024-04-27 00:10:16.262424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.211 [2024-04-27 00:10:16.265950] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.211 [2024-04-27 00:10:16.274776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.211 [2024-04-27 00:10:16.275368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.275727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.275736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.211 [2024-04-27 00:10:16.275744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.211 [2024-04-27 00:10:16.275964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.211 [2024-04-27 00:10:16.276183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.211 [2024-04-27 00:10:16.276190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.211 [2024-04-27 00:10:16.276197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.211 [2024-04-27 00:10:16.279670] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.211 [2024-04-27 00:10:16.288695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.211 [2024-04-27 00:10:16.289235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.289592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.289605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.211 [2024-04-27 00:10:16.289615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.211 [2024-04-27 00:10:16.289857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.211 [2024-04-27 00:10:16.290076] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.211 [2024-04-27 00:10:16.290085] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.211 [2024-04-27 00:10:16.290092] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.211 [2024-04-27 00:10:16.293575] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.211 [2024-04-27 00:10:16.302604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.211 [2024-04-27 00:10:16.303283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.303479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.211 [2024-04-27 00:10:16.303493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.211 [2024-04-27 00:10:16.303503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.211 [2024-04-27 00:10:16.303736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.211 [2024-04-27 00:10:16.303964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.211 [2024-04-27 00:10:16.303973] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.211 [2024-04-27 00:10:16.303980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.212 [2024-04-27 00:10:16.307475] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.212 [2024-04-27 00:10:16.316507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.212 [2024-04-27 00:10:16.317167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.317517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.317530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.212 [2024-04-27 00:10:16.317539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.212 [2024-04-27 00:10:16.317773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.212 [2024-04-27 00:10:16.317998] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.212 [2024-04-27 00:10:16.318008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.212 [2024-04-27 00:10:16.318019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.212 [2024-04-27 00:10:16.321499] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.212 [2024-04-27 00:10:16.330313] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.212 [2024-04-27 00:10:16.330913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.331251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.331260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.212 [2024-04-27 00:10:16.331268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.212 [2024-04-27 00:10:16.331483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.212 [2024-04-27 00:10:16.331697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.212 [2024-04-27 00:10:16.331705] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.212 [2024-04-27 00:10:16.331712] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.212 [2024-04-27 00:10:16.335197] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.212 [2024-04-27 00:10:16.344031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.212 [2024-04-27 00:10:16.344622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.344852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.344862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.212 [2024-04-27 00:10:16.344869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.212 [2024-04-27 00:10:16.345084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.212 [2024-04-27 00:10:16.345298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.212 [2024-04-27 00:10:16.345305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.212 [2024-04-27 00:10:16.345312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.212 [2024-04-27 00:10:16.348785] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.212 [2024-04-27 00:10:16.357798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.212 [2024-04-27 00:10:16.358357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.358710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.358723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.212 [2024-04-27 00:10:16.358733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.212 [2024-04-27 00:10:16.358979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.212 [2024-04-27 00:10:16.359199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.212 [2024-04-27 00:10:16.359207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.212 [2024-04-27 00:10:16.359215] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.212 [2024-04-27 00:10:16.362699] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.212 [2024-04-27 00:10:16.371526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.212 [2024-04-27 00:10:16.372228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.372635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.372648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.212 [2024-04-27 00:10:16.372658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.212 [2024-04-27 00:10:16.372899] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.212 [2024-04-27 00:10:16.373118] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.212 [2024-04-27 00:10:16.373126] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.212 [2024-04-27 00:10:16.373134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.212 [2024-04-27 00:10:16.376611] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.212 [2024-04-27 00:10:16.385428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.212 [2024-04-27 00:10:16.385964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.386348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.386361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.212 [2024-04-27 00:10:16.386371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.212 [2024-04-27 00:10:16.386605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.212 [2024-04-27 00:10:16.386822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.212 [2024-04-27 00:10:16.386831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.212 [2024-04-27 00:10:16.386844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.212 [2024-04-27 00:10:16.390323] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.212 [2024-04-27 00:10:16.399343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.212 [2024-04-27 00:10:16.399900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.400250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.400261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.212 [2024-04-27 00:10:16.400268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.212 [2024-04-27 00:10:16.400483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.212 [2024-04-27 00:10:16.400698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.212 [2024-04-27 00:10:16.400706] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.212 [2024-04-27 00:10:16.400712] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.212 [2024-04-27 00:10:16.404196] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.212 [2024-04-27 00:10:16.413240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.212 [2024-04-27 00:10:16.413794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.414121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.212 [2024-04-27 00:10:16.414132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.212 [2024-04-27 00:10:16.414139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.212 [2024-04-27 00:10:16.414354] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.212 [2024-04-27 00:10:16.414568] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.212 [2024-04-27 00:10:16.414576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.212 [2024-04-27 00:10:16.414583] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.212 [2024-04-27 00:10:16.418064] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.212 [2024-04-27 00:10:16.427176] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.213 [2024-04-27 00:10:16.427819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.213 [2024-04-27 00:10:16.428191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.213 [2024-04-27 00:10:16.428204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.213 [2024-04-27 00:10:16.428213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.474 [2024-04-27 00:10:16.428447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.474 [2024-04-27 00:10:16.428666] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.474 [2024-04-27 00:10:16.428675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.474 [2024-04-27 00:10:16.428682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.474 [2024-04-27 00:10:16.432173] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.474 [2024-04-27 00:10:16.441018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.474 [2024-04-27 00:10:16.441619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.474 [2024-04-27 00:10:16.441943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.474 [2024-04-27 00:10:16.441954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.474 [2024-04-27 00:10:16.441961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.474 [2024-04-27 00:10:16.442176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.474 [2024-04-27 00:10:16.442391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.474 [2024-04-27 00:10:16.442398] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.474 [2024-04-27 00:10:16.442405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.474 [2024-04-27 00:10:16.445888] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.474 [2024-04-27 00:10:16.454949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.474 [2024-04-27 00:10:16.455591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.474 [2024-04-27 00:10:16.455859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.474 [2024-04-27 00:10:16.455874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.474 [2024-04-27 00:10:16.455883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.474 [2024-04-27 00:10:16.456116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.474 [2024-04-27 00:10:16.456334] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.474 [2024-04-27 00:10:16.456342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.474 [2024-04-27 00:10:16.456349] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.474 [2024-04-27 00:10:16.459830] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.474 [2024-04-27 00:10:16.468867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.474 [2024-04-27 00:10:16.469525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.469793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.469805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.475 [2024-04-27 00:10:16.469815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.475 [2024-04-27 00:10:16.470057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.475 [2024-04-27 00:10:16.470276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.475 [2024-04-27 00:10:16.470284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.475 [2024-04-27 00:10:16.470292] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.475 [2024-04-27 00:10:16.473772] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.475 [2024-04-27 00:10:16.482597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.475 [2024-04-27 00:10:16.483265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.483654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.483666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.475 [2024-04-27 00:10:16.483676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.475 [2024-04-27 00:10:16.483917] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.475 [2024-04-27 00:10:16.484136] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.475 [2024-04-27 00:10:16.484144] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.475 [2024-04-27 00:10:16.484152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.475 [2024-04-27 00:10:16.487632] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.475 [2024-04-27 00:10:16.496457] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.475 [2024-04-27 00:10:16.497149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.497500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.497517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.475 [2024-04-27 00:10:16.497526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.475 [2024-04-27 00:10:16.497760] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.475 [2024-04-27 00:10:16.497985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.475 [2024-04-27 00:10:16.497995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.475 [2024-04-27 00:10:16.498003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.475 [2024-04-27 00:10:16.501480] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.475 [2024-04-27 00:10:16.510302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.475 [2024-04-27 00:10:16.510941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.511292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.511305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.475 [2024-04-27 00:10:16.511315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.475 [2024-04-27 00:10:16.511548] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.475 [2024-04-27 00:10:16.511766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.475 [2024-04-27 00:10:16.511774] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.475 [2024-04-27 00:10:16.511781] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.475 [2024-04-27 00:10:16.515271] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.475 [2024-04-27 00:10:16.524090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.475 [2024-04-27 00:10:16.524780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.525179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.525193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.475 [2024-04-27 00:10:16.525202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.475 [2024-04-27 00:10:16.525436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.475 [2024-04-27 00:10:16.525654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.475 [2024-04-27 00:10:16.525662] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.475 [2024-04-27 00:10:16.525669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.475 [2024-04-27 00:10:16.529153] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.475 [2024-04-27 00:10:16.537981] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.475 [2024-04-27 00:10:16.538551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.538755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.538767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.475 [2024-04-27 00:10:16.538778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.475 [2024-04-27 00:10:16.539000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.475 [2024-04-27 00:10:16.539216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.475 [2024-04-27 00:10:16.539224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.475 [2024-04-27 00:10:16.539230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.475 [2024-04-27 00:10:16.542708] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.475 [2024-04-27 00:10:16.551727] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.475 [2024-04-27 00:10:16.552403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.552756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.552769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.475 [2024-04-27 00:10:16.552778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.475 [2024-04-27 00:10:16.553019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.475 [2024-04-27 00:10:16.553237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.475 [2024-04-27 00:10:16.553246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.475 [2024-04-27 00:10:16.553253] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.475 [2024-04-27 00:10:16.556730] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.475 [2024-04-27 00:10:16.565543] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.475 [2024-04-27 00:10:16.566215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.566567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.566579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.475 [2024-04-27 00:10:16.566588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.475 [2024-04-27 00:10:16.566822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.475 [2024-04-27 00:10:16.567046] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.475 [2024-04-27 00:10:16.567054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.475 [2024-04-27 00:10:16.567062] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.475 [2024-04-27 00:10:16.570540] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.475 [2024-04-27 00:10:16.579356] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.475 [2024-04-27 00:10:16.579802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.580128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.475 [2024-04-27 00:10:16.580139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.475 [2024-04-27 00:10:16.580147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.475 [2024-04-27 00:10:16.580367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.475 [2024-04-27 00:10:16.580581] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.475 [2024-04-27 00:10:16.580588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.475 [2024-04-27 00:10:16.580595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.476 [2024-04-27 00:10:16.584072] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.476 [2024-04-27 00:10:16.593087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.476 [2024-04-27 00:10:16.593635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.593957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.593967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.476 [2024-04-27 00:10:16.593974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.476 [2024-04-27 00:10:16.594189] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.476 [2024-04-27 00:10:16.594403] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.476 [2024-04-27 00:10:16.594411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.476 [2024-04-27 00:10:16.594418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.476 [2024-04-27 00:10:16.597896] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.476 [2024-04-27 00:10:16.606925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.476 [2024-04-27 00:10:16.607579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.607931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.607944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.476 [2024-04-27 00:10:16.607953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.476 [2024-04-27 00:10:16.608187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.476 [2024-04-27 00:10:16.608405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.476 [2024-04-27 00:10:16.608414] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.476 [2024-04-27 00:10:16.608421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.476 [2024-04-27 00:10:16.611911] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.476 [2024-04-27 00:10:16.620741] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.476 [2024-04-27 00:10:16.621379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.621735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.621748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.476 [2024-04-27 00:10:16.621757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.476 [2024-04-27 00:10:16.621998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.476 [2024-04-27 00:10:16.622221] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.476 [2024-04-27 00:10:16.622230] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.476 [2024-04-27 00:10:16.622238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.476 [2024-04-27 00:10:16.625721] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.476 [2024-04-27 00:10:16.634551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.476 [2024-04-27 00:10:16.635847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.636204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.636221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.476 [2024-04-27 00:10:16.636232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.476 [2024-04-27 00:10:16.636455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.476 [2024-04-27 00:10:16.636671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.476 [2024-04-27 00:10:16.636679] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.476 [2024-04-27 00:10:16.636686] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.476 [2024-04-27 00:10:16.640176] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.476 [2024-04-27 00:10:16.648407] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.476 [2024-04-27 00:10:16.648845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.649198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.649207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.476 [2024-04-27 00:10:16.649215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.476 [2024-04-27 00:10:16.649431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.476 [2024-04-27 00:10:16.649646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.476 [2024-04-27 00:10:16.649664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.476 [2024-04-27 00:10:16.649671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.476 [2024-04-27 00:10:16.653154] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.476 [2024-04-27 00:10:16.662180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.476 [2024-04-27 00:10:16.662775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.663148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.663159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.476 [2024-04-27 00:10:16.663166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.476 [2024-04-27 00:10:16.663381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.476 [2024-04-27 00:10:16.663595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.476 [2024-04-27 00:10:16.663607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.476 [2024-04-27 00:10:16.663614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.476 [2024-04-27 00:10:16.667094] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.476 [2024-04-27 00:10:16.675913] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.476 [2024-04-27 00:10:16.676468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.676798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.676807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.476 [2024-04-27 00:10:16.676814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.476 [2024-04-27 00:10:16.677035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.476 [2024-04-27 00:10:16.677250] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.476 [2024-04-27 00:10:16.677257] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.476 [2024-04-27 00:10:16.677264] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.476 [2024-04-27 00:10:16.680740] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.476 [2024-04-27 00:10:16.689760] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.476 [2024-04-27 00:10:16.690331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.690687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.476 [2024-04-27 00:10:16.690696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.476 [2024-04-27 00:10:16.690703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.476 [2024-04-27 00:10:16.690923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.476 [2024-04-27 00:10:16.691137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.476 [2024-04-27 00:10:16.691145] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.476 [2024-04-27 00:10:16.691151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.738 [2024-04-27 00:10:16.694625] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.738 [2024-04-27 00:10:16.703643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.738 [2024-04-27 00:10:16.704223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.738 [2024-04-27 00:10:16.704562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.738 [2024-04-27 00:10:16.704571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.738 [2024-04-27 00:10:16.704579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.738 [2024-04-27 00:10:16.704793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.738 [2024-04-27 00:10:16.705014] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.738 [2024-04-27 00:10:16.705030] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.738 [2024-04-27 00:10:16.705041] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.738 [2024-04-27 00:10:16.708526] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.738 [2024-04-27 00:10:16.717545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.738 [2024-04-27 00:10:16.718113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.738 [2024-04-27 00:10:16.718488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.718497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-04-27 00:10:16.718504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.739 [2024-04-27 00:10:16.718719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.739 [2024-04-27 00:10:16.718937] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.739 [2024-04-27 00:10:16.718945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.739 [2024-04-27 00:10:16.718952] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.739 [2024-04-27 00:10:16.722430] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.739 [2024-04-27 00:10:16.731452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.739 [2024-04-27 00:10:16.731847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.732039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.732048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-04-27 00:10:16.732055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.739 [2024-04-27 00:10:16.732270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.739 [2024-04-27 00:10:16.732484] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.739 [2024-04-27 00:10:16.732492] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.739 [2024-04-27 00:10:16.732499] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.739 [2024-04-27 00:10:16.735983] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.739 [2024-04-27 00:10:16.745210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.739 [2024-04-27 00:10:16.745748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.746108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.746118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-04-27 00:10:16.746126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.739 [2024-04-27 00:10:16.746341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.739 [2024-04-27 00:10:16.746555] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.739 [2024-04-27 00:10:16.746563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.739 [2024-04-27 00:10:16.746570] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.739 [2024-04-27 00:10:16.750053] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.739 [2024-04-27 00:10:16.759076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.739 [2024-04-27 00:10:16.759730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.760154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.760168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-04-27 00:10:16.760177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.739 [2024-04-27 00:10:16.760411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.739 [2024-04-27 00:10:16.760630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.739 [2024-04-27 00:10:16.760638] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.739 [2024-04-27 00:10:16.760645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.739 [2024-04-27 00:10:16.764127] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.739 [2024-04-27 00:10:16.772937] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.739 [2024-04-27 00:10:16.773432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.773761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.773770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-04-27 00:10:16.773778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.739 [2024-04-27 00:10:16.773998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.739 [2024-04-27 00:10:16.774213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.739 [2024-04-27 00:10:16.774221] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.739 [2024-04-27 00:10:16.774228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.739 [2024-04-27 00:10:16.777701] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.739 [2024-04-27 00:10:16.786717] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.739 [2024-04-27 00:10:16.787359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.787710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.787723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-04-27 00:10:16.787732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.739 [2024-04-27 00:10:16.787972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.739 [2024-04-27 00:10:16.788190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.739 [2024-04-27 00:10:16.788199] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.739 [2024-04-27 00:10:16.788207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.739 [2024-04-27 00:10:16.791685] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.739 [2024-04-27 00:10:16.800493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.739 [2024-04-27 00:10:16.801072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.801438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.801448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-04-27 00:10:16.801455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.739 [2024-04-27 00:10:16.801671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.739 [2024-04-27 00:10:16.801889] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.739 [2024-04-27 00:10:16.801898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.739 [2024-04-27 00:10:16.801904] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.739 [2024-04-27 00:10:16.805379] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.739 [2024-04-27 00:10:16.814402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.739 [2024-04-27 00:10:16.814965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.815243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.815255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-04-27 00:10:16.815264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.739 [2024-04-27 00:10:16.815498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.739 [2024-04-27 00:10:16.815716] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.739 [2024-04-27 00:10:16.815724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.739 [2024-04-27 00:10:16.815731] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.739 [2024-04-27 00:10:16.819218] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.739 [2024-04-27 00:10:16.828231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.739 [2024-04-27 00:10:16.828782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.829122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-04-27 00:10:16.829132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-04-27 00:10:16.829140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.739 [2024-04-27 00:10:16.829355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.739 [2024-04-27 00:10:16.829570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.739 [2024-04-27 00:10:16.829577] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.739 [2024-04-27 00:10:16.829584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.739 [2024-04-27 00:10:16.833062] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.740 [2024-04-27 00:10:16.842138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.740 [2024-04-27 00:10:16.842820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.843242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.843254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-04-27 00:10:16.843263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.740 [2024-04-27 00:10:16.843497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.740 [2024-04-27 00:10:16.843715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.740 [2024-04-27 00:10:16.843724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.740 [2024-04-27 00:10:16.843731] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.740 [2024-04-27 00:10:16.847215] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.740 [2024-04-27 00:10:16.856027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.740 [2024-04-27 00:10:16.856674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.856968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.856982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-04-27 00:10:16.856991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.740 [2024-04-27 00:10:16.857225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.740 [2024-04-27 00:10:16.857443] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.740 [2024-04-27 00:10:16.857451] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.740 [2024-04-27 00:10:16.857458] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.740 [2024-04-27 00:10:16.860941] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.740 [2024-04-27 00:10:16.869754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.740 [2024-04-27 00:10:16.870387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.870734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.870747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-04-27 00:10:16.870756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.740 [2024-04-27 00:10:16.870999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.740 [2024-04-27 00:10:16.871218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.740 [2024-04-27 00:10:16.871226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.740 [2024-04-27 00:10:16.871234] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.740 [2024-04-27 00:10:16.874709] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.740 [2024-04-27 00:10:16.883515] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.740 [2024-04-27 00:10:16.884180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.884527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.884544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-04-27 00:10:16.884554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.740 [2024-04-27 00:10:16.884788] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.740 [2024-04-27 00:10:16.885014] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.740 [2024-04-27 00:10:16.885023] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.740 [2024-04-27 00:10:16.885030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.740 [2024-04-27 00:10:16.888508] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.740 [2024-04-27 00:10:16.897320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.740 [2024-04-27 00:10:16.898066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.898328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.898347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-04-27 00:10:16.898356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.740 [2024-04-27 00:10:16.898590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.740 [2024-04-27 00:10:16.898808] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.740 [2024-04-27 00:10:16.898816] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.740 [2024-04-27 00:10:16.898823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.740 [2024-04-27 00:10:16.902309] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.740 [2024-04-27 00:10:16.911130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.740 [2024-04-27 00:10:16.911700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.911950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.911964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-04-27 00:10:16.911973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.740 [2024-04-27 00:10:16.912207] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.740 [2024-04-27 00:10:16.912425] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.740 [2024-04-27 00:10:16.912433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.740 [2024-04-27 00:10:16.912440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.740 [2024-04-27 00:10:16.915923] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.740 [2024-04-27 00:10:16.924944] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.740 [2024-04-27 00:10:16.925590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.925944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.925958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-04-27 00:10:16.925971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.740 [2024-04-27 00:10:16.926206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.740 [2024-04-27 00:10:16.926424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.740 [2024-04-27 00:10:16.926432] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.740 [2024-04-27 00:10:16.926439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.740 [2024-04-27 00:10:16.929922] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.740 [2024-04-27 00:10:16.938740] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.740 [2024-04-27 00:10:16.939406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.939824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.939845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-04-27 00:10:16.939855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.740 [2024-04-27 00:10:16.940089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.740 [2024-04-27 00:10:16.940306] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.740 [2024-04-27 00:10:16.940314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.740 [2024-04-27 00:10:16.940321] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.740 [2024-04-27 00:10:16.943797] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.740 [2024-04-27 00:10:16.952607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:46.740 [2024-04-27 00:10:16.953256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.953605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-04-27 00:10:16.953618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-04-27 00:10:16.953627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:46.741 [2024-04-27 00:10:16.953870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:46.741 [2024-04-27 00:10:16.954088] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.741 [2024-04-27 00:10:16.954097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.741 [2024-04-27 00:10:16.954105] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.003 [2024-04-27 00:10:16.957581] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.003 [2024-04-27 00:10:16.966395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.003 [2024-04-27 00:10:16.967092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:16.967438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:16.967450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.003 [2024-04-27 00:10:16.967460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.003 [2024-04-27 00:10:16.967698] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.003 [2024-04-27 00:10:16.967924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.003 [2024-04-27 00:10:16.967934] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.003 [2024-04-27 00:10:16.967941] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.003 [2024-04-27 00:10:16.971419] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.003 [2024-04-27 00:10:16.980226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.003 [2024-04-27 00:10:16.980791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:16.981111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:16.981122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.003 [2024-04-27 00:10:16.981130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.003 [2024-04-27 00:10:16.981345] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.003 [2024-04-27 00:10:16.981559] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.003 [2024-04-27 00:10:16.981568] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.003 [2024-04-27 00:10:16.981574] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.003 [2024-04-27 00:10:16.985049] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.003 [2024-04-27 00:10:16.994060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.003 [2024-04-27 00:10:16.994606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:16.994926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:16.994936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.003 [2024-04-27 00:10:16.994943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.003 [2024-04-27 00:10:16.995158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.003 [2024-04-27 00:10:16.995372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.003 [2024-04-27 00:10:16.995380] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.003 [2024-04-27 00:10:16.995386] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.003 [2024-04-27 00:10:16.998859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.003 [2024-04-27 00:10:17.007877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.003 [2024-04-27 00:10:17.008525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.008881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.008895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.003 [2024-04-27 00:10:17.008905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.003 [2024-04-27 00:10:17.009138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.003 [2024-04-27 00:10:17.009360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.003 [2024-04-27 00:10:17.009369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.003 [2024-04-27 00:10:17.009376] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.003 [2024-04-27 00:10:17.012859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.003 [2024-04-27 00:10:17.021668] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.003 [2024-04-27 00:10:17.022327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.022675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.022688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.003 [2024-04-27 00:10:17.022697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.003 [2024-04-27 00:10:17.022939] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.003 [2024-04-27 00:10:17.023158] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.003 [2024-04-27 00:10:17.023166] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.003 [2024-04-27 00:10:17.023173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.003 [2024-04-27 00:10:17.026653] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.003 [2024-04-27 00:10:17.035466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.003 [2024-04-27 00:10:17.036136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.036484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.036497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.003 [2024-04-27 00:10:17.036507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.003 [2024-04-27 00:10:17.036740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.003 [2024-04-27 00:10:17.036966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.003 [2024-04-27 00:10:17.036975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.003 [2024-04-27 00:10:17.036982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.003 [2024-04-27 00:10:17.040460] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.003 [2024-04-27 00:10:17.049266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.003 [2024-04-27 00:10:17.049816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.050178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.050191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.003 [2024-04-27 00:10:17.050200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.003 [2024-04-27 00:10:17.050434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.003 [2024-04-27 00:10:17.050651] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.003 [2024-04-27 00:10:17.050659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.003 [2024-04-27 00:10:17.050671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.003 [2024-04-27 00:10:17.054156] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.003 [2024-04-27 00:10:17.063169] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.003 [2024-04-27 00:10:17.063731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.063961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.003 [2024-04-27 00:10:17.063972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.003 [2024-04-27 00:10:17.063980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.004 [2024-04-27 00:10:17.064195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.004 [2024-04-27 00:10:17.064410] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.004 [2024-04-27 00:10:17.064417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.004 [2024-04-27 00:10:17.064424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.004 [2024-04-27 00:10:17.067900] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.004 [2024-04-27 00:10:17.076907] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.004 [2024-04-27 00:10:17.077425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.077766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.077776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.004 [2024-04-27 00:10:17.077783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.004 [2024-04-27 00:10:17.078004] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.004 [2024-04-27 00:10:17.078218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.004 [2024-04-27 00:10:17.078226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.004 [2024-04-27 00:10:17.078233] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.004 [2024-04-27 00:10:17.081704] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.004 [2024-04-27 00:10:17.090710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.004 [2024-04-27 00:10:17.091337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.091765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.091777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.004 [2024-04-27 00:10:17.091786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.004 [2024-04-27 00:10:17.092029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.004 [2024-04-27 00:10:17.092248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.004 [2024-04-27 00:10:17.092256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.004 [2024-04-27 00:10:17.092263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.004 [2024-04-27 00:10:17.095746] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.004 [2024-04-27 00:10:17.104557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.004 [2024-04-27 00:10:17.105240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.105595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.105608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.004 [2024-04-27 00:10:17.105617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.004 [2024-04-27 00:10:17.105859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.004 [2024-04-27 00:10:17.106078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.004 [2024-04-27 00:10:17.106087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.004 [2024-04-27 00:10:17.106094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.004 [2024-04-27 00:10:17.109582] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.004 [2024-04-27 00:10:17.118391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.004 [2024-04-27 00:10:17.118976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.119226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.119240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.004 [2024-04-27 00:10:17.119250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.004 [2024-04-27 00:10:17.119484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.004 [2024-04-27 00:10:17.119702] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.004 [2024-04-27 00:10:17.119710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.004 [2024-04-27 00:10:17.119717] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.004 [2024-04-27 00:10:17.123203] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.004 [2024-04-27 00:10:17.132219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.004 [2024-04-27 00:10:17.132882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.133239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.133252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.004 [2024-04-27 00:10:17.133261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.004 [2024-04-27 00:10:17.133501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.004 [2024-04-27 00:10:17.133719] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.004 [2024-04-27 00:10:17.133728] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.004 [2024-04-27 00:10:17.133736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.004 [2024-04-27 00:10:17.137226] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.004 [2024-04-27 00:10:17.146041] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.004 [2024-04-27 00:10:17.146593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.146999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.147013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.004 [2024-04-27 00:10:17.147023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.004 [2024-04-27 00:10:17.147257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.004 [2024-04-27 00:10:17.147475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.004 [2024-04-27 00:10:17.147483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.004 [2024-04-27 00:10:17.147491] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.004 [2024-04-27 00:10:17.150976] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.004 [2024-04-27 00:10:17.159784] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.004 [2024-04-27 00:10:17.160480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.160844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.160857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.004 [2024-04-27 00:10:17.160866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.004 [2024-04-27 00:10:17.161100] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.004 [2024-04-27 00:10:17.161318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.004 [2024-04-27 00:10:17.161327] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.004 [2024-04-27 00:10:17.161335] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.004 [2024-04-27 00:10:17.164814] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.004 [2024-04-27 00:10:17.173627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.004 [2024-04-27 00:10:17.174221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.174544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.004 [2024-04-27 00:10:17.174555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.004 [2024-04-27 00:10:17.174563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.004 [2024-04-27 00:10:17.174778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.004 [2024-04-27 00:10:17.174998] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.004 [2024-04-27 00:10:17.175006] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.004 [2024-04-27 00:10:17.175013] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.004 [2024-04-27 00:10:17.178489] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.004 [2024-04-27 00:10:17.187498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.004 [2024-04-27 00:10:17.188180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.005 [2024-04-27 00:10:17.188409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.005 [2024-04-27 00:10:17.188423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.005 [2024-04-27 00:10:17.188432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.005 [2024-04-27 00:10:17.188666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.005 [2024-04-27 00:10:17.188893] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.005 [2024-04-27 00:10:17.188901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.005 [2024-04-27 00:10:17.188909] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.005 [2024-04-27 00:10:17.192389] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.005 [2024-04-27 00:10:17.201607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.005 [2024-04-27 00:10:17.202269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.005 [2024-04-27 00:10:17.202590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.005 [2024-04-27 00:10:17.202603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.005 [2024-04-27 00:10:17.202612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.005 [2024-04-27 00:10:17.202854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.005 [2024-04-27 00:10:17.203073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.005 [2024-04-27 00:10:17.203081] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.005 [2024-04-27 00:10:17.203089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.005 [2024-04-27 00:10:17.206567] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.005 [2024-04-27 00:10:17.215385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.005 [2024-04-27 00:10:17.215954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.005 [2024-04-27 00:10:17.216350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.005 [2024-04-27 00:10:17.216363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.005 [2024-04-27 00:10:17.216372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.005 [2024-04-27 00:10:17.216605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.005 [2024-04-27 00:10:17.216822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.005 [2024-04-27 00:10:17.216830] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.005 [2024-04-27 00:10:17.216847] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.005 [2024-04-27 00:10:17.220327] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.267 [2024-04-27 00:10:17.229137] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.268 [2024-04-27 00:10:17.229827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.230204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.230216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.268 [2024-04-27 00:10:17.230226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.268 [2024-04-27 00:10:17.230459] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.268 [2024-04-27 00:10:17.230677] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.268 [2024-04-27 00:10:17.230685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.268 [2024-04-27 00:10:17.230693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.268 [2024-04-27 00:10:17.234183] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.268 [2024-04-27 00:10:17.243003] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.268 [2024-04-27 00:10:17.243659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.244026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.244040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.268 [2024-04-27 00:10:17.244049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.268 [2024-04-27 00:10:17.244283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.268 [2024-04-27 00:10:17.244500] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.268 [2024-04-27 00:10:17.244510] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.268 [2024-04-27 00:10:17.244517] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.268 [2024-04-27 00:10:17.247999] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.268 [2024-04-27 00:10:17.256807] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.268 [2024-04-27 00:10:17.257490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.257849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.257862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.268 [2024-04-27 00:10:17.257871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.268 [2024-04-27 00:10:17.258105] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.268 [2024-04-27 00:10:17.258322] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.268 [2024-04-27 00:10:17.258332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.268 [2024-04-27 00:10:17.258339] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.268 [2024-04-27 00:10:17.261817] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.268 [2024-04-27 00:10:17.270628] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.268 [2024-04-27 00:10:17.271301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.271650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.271663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.268 [2024-04-27 00:10:17.271680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.268 [2024-04-27 00:10:17.271926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.268 [2024-04-27 00:10:17.272145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.268 [2024-04-27 00:10:17.272153] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.268 [2024-04-27 00:10:17.272161] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.268 [2024-04-27 00:10:17.275636] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.268 [2024-04-27 00:10:17.284440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.268 [2024-04-27 00:10:17.285055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.285272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.285287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.268 [2024-04-27 00:10:17.285296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.268 [2024-04-27 00:10:17.285530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.268 [2024-04-27 00:10:17.285748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.268 [2024-04-27 00:10:17.285756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.268 [2024-04-27 00:10:17.285764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.268 [2024-04-27 00:10:17.289250] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.268 [2024-04-27 00:10:17.298260] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.268 [2024-04-27 00:10:17.298937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.299266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.299279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.268 [2024-04-27 00:10:17.299288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.268 [2024-04-27 00:10:17.299522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.268 [2024-04-27 00:10:17.299739] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.268 [2024-04-27 00:10:17.299748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.268 [2024-04-27 00:10:17.299756] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.268 [2024-04-27 00:10:17.303241] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.268 [2024-04-27 00:10:17.312059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.268 [2024-04-27 00:10:17.312737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.313079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.313093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.268 [2024-04-27 00:10:17.313102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.268 [2024-04-27 00:10:17.313340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.268 [2024-04-27 00:10:17.313558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.268 [2024-04-27 00:10:17.313572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.268 [2024-04-27 00:10:17.313579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.268 [2024-04-27 00:10:17.317060] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.268 [2024-04-27 00:10:17.325873] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.268 [2024-04-27 00:10:17.326543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.326900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.268 [2024-04-27 00:10:17.326914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.268 [2024-04-27 00:10:17.326924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.268 [2024-04-27 00:10:17.327157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.268 [2024-04-27 00:10:17.327375] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.268 [2024-04-27 00:10:17.327384] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.268 [2024-04-27 00:10:17.327391] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.268 [2024-04-27 00:10:17.330874] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.269 [2024-04-27 00:10:17.339694] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.269 [2024-04-27 00:10:17.340360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.340708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.340721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.269 [2024-04-27 00:10:17.340730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.269 [2024-04-27 00:10:17.340972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.269 [2024-04-27 00:10:17.341190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.269 [2024-04-27 00:10:17.341199] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.269 [2024-04-27 00:10:17.341207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.269 [2024-04-27 00:10:17.344684] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.269 [2024-04-27 00:10:17.353492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.269 [2024-04-27 00:10:17.354174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.354526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.354539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.269 [2024-04-27 00:10:17.354549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.269 [2024-04-27 00:10:17.354782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.269 [2024-04-27 00:10:17.355011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.269 [2024-04-27 00:10:17.355020] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.269 [2024-04-27 00:10:17.355027] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.269 [2024-04-27 00:10:17.358505] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.269 [2024-04-27 00:10:17.367316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.269 [2024-04-27 00:10:17.367915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.368240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.368250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.269 [2024-04-27 00:10:17.368257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.269 [2024-04-27 00:10:17.368472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.269 [2024-04-27 00:10:17.368687] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.269 [2024-04-27 00:10:17.368694] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.269 [2024-04-27 00:10:17.368701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.269 [2024-04-27 00:10:17.372179] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.269 [2024-04-27 00:10:17.381190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.269 [2024-04-27 00:10:17.381727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.382049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.382060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.269 [2024-04-27 00:10:17.382067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.269 [2024-04-27 00:10:17.382281] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.269 [2024-04-27 00:10:17.382496] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.269 [2024-04-27 00:10:17.382504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.269 [2024-04-27 00:10:17.382510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.269 [2024-04-27 00:10:17.385988] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.269 [2024-04-27 00:10:17.394996] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.269 [2024-04-27 00:10:17.395620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.395975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.395990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.269 [2024-04-27 00:10:17.395999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.269 [2024-04-27 00:10:17.396233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.269 [2024-04-27 00:10:17.396451] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.269 [2024-04-27 00:10:17.396464] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.269 [2024-04-27 00:10:17.396472] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.269 [2024-04-27 00:10:17.399960] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.269 [2024-04-27 00:10:17.408783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.269 [2024-04-27 00:10:17.409442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.409793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.409806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.269 [2024-04-27 00:10:17.409816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.269 [2024-04-27 00:10:17.410057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.269 [2024-04-27 00:10:17.410275] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.269 [2024-04-27 00:10:17.410284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.269 [2024-04-27 00:10:17.410291] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.269 [2024-04-27 00:10:17.413772] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.269 [2024-04-27 00:10:17.422580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.269 [2024-04-27 00:10:17.423067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.423348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.423360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.269 [2024-04-27 00:10:17.423368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.269 [2024-04-27 00:10:17.423584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.269 [2024-04-27 00:10:17.423798] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.269 [2024-04-27 00:10:17.423807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.269 [2024-04-27 00:10:17.423813] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.269 [2024-04-27 00:10:17.427295] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.269 [2024-04-27 00:10:17.436315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.269 [2024-04-27 00:10:17.436941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.437332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.437344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.269 [2024-04-27 00:10:17.437354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.269 [2024-04-27 00:10:17.437587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.269 [2024-04-27 00:10:17.437805] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.269 [2024-04-27 00:10:17.437813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.269 [2024-04-27 00:10:17.437824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.269 [2024-04-27 00:10:17.441310] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.269 [2024-04-27 00:10:17.450123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.269 [2024-04-27 00:10:17.450781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.451174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.269 [2024-04-27 00:10:17.451188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.269 [2024-04-27 00:10:17.451198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.269 [2024-04-27 00:10:17.451431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.269 [2024-04-27 00:10:17.451649] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.269 [2024-04-27 00:10:17.451658] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.270 [2024-04-27 00:10:17.451665] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.270 [2024-04-27 00:10:17.455146] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.270 [2024-04-27 00:10:17.463954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.270 [2024-04-27 00:10:17.464636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.270 [2024-04-27 00:10:17.464984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.270 [2024-04-27 00:10:17.464998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.270 [2024-04-27 00:10:17.465007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.270 [2024-04-27 00:10:17.465241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.270 [2024-04-27 00:10:17.465459] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.270 [2024-04-27 00:10:17.465467] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.270 [2024-04-27 00:10:17.465474] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.270 [2024-04-27 00:10:17.468957] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.270 [2024-04-27 00:10:17.477764] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.270 [2024-04-27 00:10:17.478430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.270 [2024-04-27 00:10:17.478790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.270 [2024-04-27 00:10:17.478804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.270 [2024-04-27 00:10:17.478813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.270 [2024-04-27 00:10:17.479060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.270 [2024-04-27 00:10:17.479279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.270 [2024-04-27 00:10:17.479287] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.270 [2024-04-27 00:10:17.479294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.270 [2024-04-27 00:10:17.482859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.532 [2024-04-27 00:10:17.491672] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.532 [2024-04-27 00:10:17.492333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.532 [2024-04-27 00:10:17.492671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.532 [2024-04-27 00:10:17.492685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.532 [2024-04-27 00:10:17.492694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.532 [2024-04-27 00:10:17.492936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.532 [2024-04-27 00:10:17.493156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.532 [2024-04-27 00:10:17.493164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.532 [2024-04-27 00:10:17.493171] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.532 [2024-04-27 00:10:17.496647] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.532 [2024-04-27 00:10:17.505454] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.532 [2024-04-27 00:10:17.506137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.532 [2024-04-27 00:10:17.506486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.532 [2024-04-27 00:10:17.506499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.532 [2024-04-27 00:10:17.506508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.532 [2024-04-27 00:10:17.506742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.532 [2024-04-27 00:10:17.506976] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.532 [2024-04-27 00:10:17.506985] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.532 [2024-04-27 00:10:17.506992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.532 [2024-04-27 00:10:17.510471] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.532 [2024-04-27 00:10:17.519278] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.532 [2024-04-27 00:10:17.519942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.532 [2024-04-27 00:10:17.520273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.532 [2024-04-27 00:10:17.520286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.532 [2024-04-27 00:10:17.520295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.532 [2024-04-27 00:10:17.520529] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.532 [2024-04-27 00:10:17.520747] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.532 [2024-04-27 00:10:17.520756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.532 [2024-04-27 00:10:17.520763] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.532 [2024-04-27 00:10:17.524249] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.532 [2024-04-27 00:10:17.533065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.533 [2024-04-27 00:10:17.533667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.534021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.534035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.533 [2024-04-27 00:10:17.534044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.533 [2024-04-27 00:10:17.534278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.533 [2024-04-27 00:10:17.534496] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.533 [2024-04-27 00:10:17.534511] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.533 [2024-04-27 00:10:17.534518] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.533 [2024-04-27 00:10:17.538006] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.533 [2024-04-27 00:10:17.546818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.533 [2024-04-27 00:10:17.547438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.547795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.547808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.533 [2024-04-27 00:10:17.547817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.533 [2024-04-27 00:10:17.548060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.533 [2024-04-27 00:10:17.548279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.533 [2024-04-27 00:10:17.548287] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.533 [2024-04-27 00:10:17.548294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.533 [2024-04-27 00:10:17.551772] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.533 [2024-04-27 00:10:17.560578] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.533 [2024-04-27 00:10:17.561118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.561478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.561491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.533 [2024-04-27 00:10:17.561500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.533 [2024-04-27 00:10:17.561734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.533 [2024-04-27 00:10:17.561960] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.533 [2024-04-27 00:10:17.561969] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.533 [2024-04-27 00:10:17.561976] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.533 [2024-04-27 00:10:17.565455] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.533 [2024-04-27 00:10:17.574464] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.533 [2024-04-27 00:10:17.574974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.575217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.575232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.533 [2024-04-27 00:10:17.575241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.533 [2024-04-27 00:10:17.575475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.533 [2024-04-27 00:10:17.575693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.533 [2024-04-27 00:10:17.575701] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.533 [2024-04-27 00:10:17.575708] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.533 [2024-04-27 00:10:17.579195] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.533 [2024-04-27 00:10:17.588211] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.533 [2024-04-27 00:10:17.588904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.589210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.589231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.533 [2024-04-27 00:10:17.589241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.533 [2024-04-27 00:10:17.589475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.533 [2024-04-27 00:10:17.589693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.533 [2024-04-27 00:10:17.589701] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.533 [2024-04-27 00:10:17.589708] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.533 [2024-04-27 00:10:17.593194] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.533 [2024-04-27 00:10:17.602004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.533 [2024-04-27 00:10:17.602606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.602966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.602980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.533 [2024-04-27 00:10:17.602989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.533 [2024-04-27 00:10:17.603224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.533 [2024-04-27 00:10:17.603441] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.533 [2024-04-27 00:10:17.603450] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.533 [2024-04-27 00:10:17.603457] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.533 [2024-04-27 00:10:17.606947] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.533 [2024-04-27 00:10:17.615754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.533 [2024-04-27 00:10:17.616390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.616735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.616752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.533 [2024-04-27 00:10:17.616761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.533 [2024-04-27 00:10:17.617004] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.533 [2024-04-27 00:10:17.617222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.533 [2024-04-27 00:10:17.617231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.533 [2024-04-27 00:10:17.617238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.533 [2024-04-27 00:10:17.620716] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.533 [2024-04-27 00:10:17.629525] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.533 [2024-04-27 00:10:17.630185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.630534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.630547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.533 [2024-04-27 00:10:17.630556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.533 [2024-04-27 00:10:17.630790] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.533 [2024-04-27 00:10:17.631015] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.533 [2024-04-27 00:10:17.631025] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.533 [2024-04-27 00:10:17.631032] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.533 [2024-04-27 00:10:17.634516] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.533 [2024-04-27 00:10:17.643331] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.533 [2024-04-27 00:10:17.644037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.644402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.533 [2024-04-27 00:10:17.644415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.533 [2024-04-27 00:10:17.644424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.533 [2024-04-27 00:10:17.644658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.533 [2024-04-27 00:10:17.644883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.534 [2024-04-27 00:10:17.644893] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.534 [2024-04-27 00:10:17.644900] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.534 [2024-04-27 00:10:17.648378] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.534 [2024-04-27 00:10:17.657193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.534 [2024-04-27 00:10:17.657862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.658131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.658145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.534 [2024-04-27 00:10:17.658159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.534 [2024-04-27 00:10:17.658393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.534 [2024-04-27 00:10:17.658611] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.534 [2024-04-27 00:10:17.658619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.534 [2024-04-27 00:10:17.658626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.534 [2024-04-27 00:10:17.662109] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.534 [2024-04-27 00:10:17.670919] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.534 [2024-04-27 00:10:17.671605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.671974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.671988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.534 [2024-04-27 00:10:17.671997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.534 [2024-04-27 00:10:17.672231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.534 [2024-04-27 00:10:17.672448] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.534 [2024-04-27 00:10:17.672456] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.534 [2024-04-27 00:10:17.672463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.534 [2024-04-27 00:10:17.675944] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.534 [2024-04-27 00:10:17.684753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.534 [2024-04-27 00:10:17.685296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.685644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.685657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.534 [2024-04-27 00:10:17.685666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.534 [2024-04-27 00:10:17.685909] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.534 [2024-04-27 00:10:17.686128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.534 [2024-04-27 00:10:17.686137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.534 [2024-04-27 00:10:17.686144] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.534 [2024-04-27 00:10:17.689628] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.534 [2024-04-27 00:10:17.698651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.534 [2024-04-27 00:10:17.699312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.699658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.699670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.534 [2024-04-27 00:10:17.699679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.534 [2024-04-27 00:10:17.699926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.534 [2024-04-27 00:10:17.700145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.534 [2024-04-27 00:10:17.700153] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.534 [2024-04-27 00:10:17.700160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.534 [2024-04-27 00:10:17.703636] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.534 [2024-04-27 00:10:17.712455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.534 [2024-04-27 00:10:17.712992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.713354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.713367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.534 [2024-04-27 00:10:17.713376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.534 [2024-04-27 00:10:17.713610] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.534 [2024-04-27 00:10:17.713827] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.534 [2024-04-27 00:10:17.713836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.534 [2024-04-27 00:10:17.713852] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.534 [2024-04-27 00:10:17.717331] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.534 [2024-04-27 00:10:17.726343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.534 [2024-04-27 00:10:17.726937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.727299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.727312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.534 [2024-04-27 00:10:17.727321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.534 [2024-04-27 00:10:17.727555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.534 [2024-04-27 00:10:17.727772] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.534 [2024-04-27 00:10:17.727780] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.534 [2024-04-27 00:10:17.727788] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.534 [2024-04-27 00:10:17.731274] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.534 [2024-04-27 00:10:17.740096] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.534 [2024-04-27 00:10:17.740730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.741153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.534 [2024-04-27 00:10:17.741167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.534 [2024-04-27 00:10:17.741176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.534 [2024-04-27 00:10:17.741410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.534 [2024-04-27 00:10:17.741632] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.534 [2024-04-27 00:10:17.741641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.534 [2024-04-27 00:10:17.741648] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.534 [2024-04-27 00:10:17.745130] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.797 [2024-04-27 00:10:17.753941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.797 [2024-04-27 00:10:17.754623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.754914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.754928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.797 [2024-04-27 00:10:17.754937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.797 [2024-04-27 00:10:17.755171] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.797 [2024-04-27 00:10:17.755389] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.797 [2024-04-27 00:10:17.755397] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.797 [2024-04-27 00:10:17.755404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.797 [2024-04-27 00:10:17.758887] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.797 [2024-04-27 00:10:17.767695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.797 [2024-04-27 00:10:17.768261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.768615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.768625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.797 [2024-04-27 00:10:17.768632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.797 [2024-04-27 00:10:17.768852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.797 [2024-04-27 00:10:17.769067] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.797 [2024-04-27 00:10:17.769075] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.797 [2024-04-27 00:10:17.769082] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.797 [2024-04-27 00:10:17.772552] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.797 [2024-04-27 00:10:17.781565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.797 [2024-04-27 00:10:17.782210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.782550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.782563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.797 [2024-04-27 00:10:17.782572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.797 [2024-04-27 00:10:17.782806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.797 [2024-04-27 00:10:17.783033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.797 [2024-04-27 00:10:17.783046] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.797 [2024-04-27 00:10:17.783053] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.797 [2024-04-27 00:10:17.786530] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.797 [2024-04-27 00:10:17.795344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.797 [2024-04-27 00:10:17.795944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.796349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.796361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.797 [2024-04-27 00:10:17.796371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.797 [2024-04-27 00:10:17.796604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.797 [2024-04-27 00:10:17.796822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.797 [2024-04-27 00:10:17.796830] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.797 [2024-04-27 00:10:17.796846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.797 [2024-04-27 00:10:17.800324] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.797 [2024-04-27 00:10:17.809145] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.797 [2024-04-27 00:10:17.809618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.809976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.809986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.797 [2024-04-27 00:10:17.809994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.797 [2024-04-27 00:10:17.810209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.797 [2024-04-27 00:10:17.810424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.797 [2024-04-27 00:10:17.810432] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.797 [2024-04-27 00:10:17.810438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.797 [2024-04-27 00:10:17.813914] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.797 [2024-04-27 00:10:17.822921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.797 [2024-04-27 00:10:17.823474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.823809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.823818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.797 [2024-04-27 00:10:17.823826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.797 [2024-04-27 00:10:17.824046] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.797 [2024-04-27 00:10:17.824260] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.797 [2024-04-27 00:10:17.824268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.797 [2024-04-27 00:10:17.824279] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.797 [2024-04-27 00:10:17.827749] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.797 [2024-04-27 00:10:17.836767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.797 [2024-04-27 00:10:17.837327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.837632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.837643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.797 [2024-04-27 00:10:17.837651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.797 [2024-04-27 00:10:17.837870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.797 [2024-04-27 00:10:17.838086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.797 [2024-04-27 00:10:17.838093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.797 [2024-04-27 00:10:17.838100] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.797 [2024-04-27 00:10:17.841571] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.797 [2024-04-27 00:10:17.850582] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.797 [2024-04-27 00:10:17.851259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.851504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.797 [2024-04-27 00:10:17.851518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.798 [2024-04-27 00:10:17.851527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.798 [2024-04-27 00:10:17.851762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.798 [2024-04-27 00:10:17.851985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.798 [2024-04-27 00:10:17.851993] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.798 [2024-04-27 00:10:17.852001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.798 [2024-04-27 00:10:17.855480] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.798 [2024-04-27 00:10:17.864575] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.798 [2024-04-27 00:10:17.865099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.865402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.865412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.798 [2024-04-27 00:10:17.865420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.798 [2024-04-27 00:10:17.865635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.798 [2024-04-27 00:10:17.865855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.798 [2024-04-27 00:10:17.865863] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.798 [2024-04-27 00:10:17.865870] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.798 [2024-04-27 00:10:17.869346] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.798 [2024-04-27 00:10:17.878380] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.798 [2024-04-27 00:10:17.878983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.879254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.879267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.798 [2024-04-27 00:10:17.879276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.798 [2024-04-27 00:10:17.879510] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.798 [2024-04-27 00:10:17.879728] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.798 [2024-04-27 00:10:17.879736] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.798 [2024-04-27 00:10:17.879743] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.798 [2024-04-27 00:10:17.883231] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.798 [2024-04-27 00:10:17.892246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.798 [2024-04-27 00:10:17.892896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.893315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.893328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.798 [2024-04-27 00:10:17.893337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.798 [2024-04-27 00:10:17.893571] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.798 [2024-04-27 00:10:17.893789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.798 [2024-04-27 00:10:17.893797] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.798 [2024-04-27 00:10:17.893804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.798 [2024-04-27 00:10:17.897296] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.798 [2024-04-27 00:10:17.906108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.798 [2024-04-27 00:10:17.906795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.907198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.907211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.798 [2024-04-27 00:10:17.907221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.798 [2024-04-27 00:10:17.907454] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.798 [2024-04-27 00:10:17.907672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.798 [2024-04-27 00:10:17.907680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.798 [2024-04-27 00:10:17.907688] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.798 [2024-04-27 00:10:17.911169] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.798 [2024-04-27 00:10:17.919859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.798 [2024-04-27 00:10:17.920330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.920675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.920685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.798 [2024-04-27 00:10:17.920692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.798 [2024-04-27 00:10:17.920913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.798 [2024-04-27 00:10:17.921128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.798 [2024-04-27 00:10:17.921136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.798 [2024-04-27 00:10:17.921143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.798 [2024-04-27 00:10:17.924615] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.798 [2024-04-27 00:10:17.933629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.798 [2024-04-27 00:10:17.934199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.934556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.934569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.798 [2024-04-27 00:10:17.934578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.798 [2024-04-27 00:10:17.934812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.798 [2024-04-27 00:10:17.935038] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.798 [2024-04-27 00:10:17.935048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.798 [2024-04-27 00:10:17.935056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.798 [2024-04-27 00:10:17.938544] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.798 [2024-04-27 00:10:17.947363] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.798 [2024-04-27 00:10:17.947989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.948399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.948412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.798 [2024-04-27 00:10:17.948421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.798 [2024-04-27 00:10:17.948655] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.798 [2024-04-27 00:10:17.948880] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.798 [2024-04-27 00:10:17.948889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.798 [2024-04-27 00:10:17.948897] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.798 [2024-04-27 00:10:17.952377] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.798 [2024-04-27 00:10:17.961188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.798 [2024-04-27 00:10:17.961915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.962308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.798 [2024-04-27 00:10:17.962321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.798 [2024-04-27 00:10:17.962330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.798 [2024-04-27 00:10:17.962564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.798 [2024-04-27 00:10:17.962782] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.798 [2024-04-27 00:10:17.962790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.798 [2024-04-27 00:10:17.962798] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.798 [2024-04-27 00:10:17.966282] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.798 [2024-04-27 00:10:17.975092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.798 [2024-04-27 00:10:17.975678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.799 [2024-04-27 00:10:17.976050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.799 [2024-04-27 00:10:17.976064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.799 [2024-04-27 00:10:17.976073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.799 [2024-04-27 00:10:17.976307] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.799 [2024-04-27 00:10:17.976525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.799 [2024-04-27 00:10:17.976533] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.799 [2024-04-27 00:10:17.976541] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.799 [2024-04-27 00:10:17.980023] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:47.799 [2024-04-27 00:10:17.988831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.799 [2024-04-27 00:10:17.989388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.799 [2024-04-27 00:10:17.989712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.799 [2024-04-27 00:10:17.989722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.799 [2024-04-27 00:10:17.989730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.799 [2024-04-27 00:10:17.989951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.799 [2024-04-27 00:10:17.990167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.799 [2024-04-27 00:10:17.990174] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.799 [2024-04-27 00:10:17.990181] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.799 [2024-04-27 00:10:17.993655] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:47.799 [2024-04-27 00:10:18.002670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.799 [2024-04-27 00:10:18.003356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.799 [2024-04-27 00:10:18.003605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.799 [2024-04-27 00:10:18.003617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:47.799 [2024-04-27 00:10:18.003630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:47.799 [2024-04-27 00:10:18.003872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:47.799 [2024-04-27 00:10:18.004091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:47.799 [2024-04-27 00:10:18.004099] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:47.799 [2024-04-27 00:10:18.004107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.799 [2024-04-27 00:10:18.007595] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.064 [2024-04-27 00:10:18.016405] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.064 [2024-04-27 00:10:18.016940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.064 [2024-04-27 00:10:18.017328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.064 [2024-04-27 00:10:18.017340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.064 [2024-04-27 00:10:18.017350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.064 [2024-04-27 00:10:18.017583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.064 [2024-04-27 00:10:18.017801] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.064 [2024-04-27 00:10:18.017810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.064 [2024-04-27 00:10:18.017817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.064 [2024-04-27 00:10:18.021301] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.064 [2024-04-27 00:10:18.030313] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.064 [2024-04-27 00:10:18.030758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.064 [2024-04-27 00:10:18.031127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.064 [2024-04-27 00:10:18.031138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.064 [2024-04-27 00:10:18.031146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.064 [2024-04-27 00:10:18.031362] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.064 [2024-04-27 00:10:18.031577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.064 [2024-04-27 00:10:18.031590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.064 [2024-04-27 00:10:18.031596] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.064 [2024-04-27 00:10:18.035081] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.064 [2024-04-27 00:10:18.044097] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.064 [2024-04-27 00:10:18.044719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.064 [2024-04-27 00:10:18.045131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.064 [2024-04-27 00:10:18.045146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.064 [2024-04-27 00:10:18.045156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.064 [2024-04-27 00:10:18.045394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.064 [2024-04-27 00:10:18.045612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.064 [2024-04-27 00:10:18.045620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.064 [2024-04-27 00:10:18.045627] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.064 [2024-04-27 00:10:18.049112] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.064 [2024-04-27 00:10:18.057920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.064 [2024-04-27 00:10:18.058527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.064 [2024-04-27 00:10:18.058870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.064 [2024-04-27 00:10:18.058881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.064 [2024-04-27 00:10:18.058888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.064 [2024-04-27 00:10:18.059103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.064 [2024-04-27 00:10:18.059318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.064 [2024-04-27 00:10:18.059326] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.064 [2024-04-27 00:10:18.059333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.064 [2024-04-27 00:10:18.062822] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.064 [2024-04-27 00:10:18.071631] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.064 [2024-04-27 00:10:18.072346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.072665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.072679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-04-27 00:10:18.072688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.065 [2024-04-27 00:10:18.072929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.065 [2024-04-27 00:10:18.073147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.065 [2024-04-27 00:10:18.073155] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.065 [2024-04-27 00:10:18.073163] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.065 [2024-04-27 00:10:18.076641] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.065 [2024-04-27 00:10:18.085455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.065 [2024-04-27 00:10:18.086160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.086515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.086528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-04-27 00:10:18.086537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.065 [2024-04-27 00:10:18.086771] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.065 [2024-04-27 00:10:18.086998] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.065 [2024-04-27 00:10:18.087007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.065 [2024-04-27 00:10:18.087015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.065 [2024-04-27 00:10:18.090494] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.065 [2024-04-27 00:10:18.099306] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.065 [2024-04-27 00:10:18.099780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.100097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.100107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-04-27 00:10:18.100115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.065 [2024-04-27 00:10:18.100330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.065 [2024-04-27 00:10:18.100544] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.065 [2024-04-27 00:10:18.100552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.065 [2024-04-27 00:10:18.100559] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.065 [2024-04-27 00:10:18.104045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.065 [2024-04-27 00:10:18.113077] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.065 [2024-04-27 00:10:18.113707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.113949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.113963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-04-27 00:10:18.113973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.065 [2024-04-27 00:10:18.114206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.065 [2024-04-27 00:10:18.114424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.065 [2024-04-27 00:10:18.114432] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.065 [2024-04-27 00:10:18.114440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.065 [2024-04-27 00:10:18.117926] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.065 [2024-04-27 00:10:18.126957] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.065 [2024-04-27 00:10:18.127552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.127882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.127893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-04-27 00:10:18.127901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.065 [2024-04-27 00:10:18.128116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.065 [2024-04-27 00:10:18.128330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.065 [2024-04-27 00:10:18.128343] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.065 [2024-04-27 00:10:18.128350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.065 [2024-04-27 00:10:18.131827] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.065 [2024-04-27 00:10:18.140879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.065 [2024-04-27 00:10:18.141523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.141874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.141888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-04-27 00:10:18.141897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.065 [2024-04-27 00:10:18.142131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.065 [2024-04-27 00:10:18.142349] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.065 [2024-04-27 00:10:18.142358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.065 [2024-04-27 00:10:18.142366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.065 [2024-04-27 00:10:18.145858] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.065 [2024-04-27 00:10:18.154684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.065 [2024-04-27 00:10:18.155382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.155732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.155745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-04-27 00:10:18.155754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.065 [2024-04-27 00:10:18.155996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.065 [2024-04-27 00:10:18.156214] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.065 [2024-04-27 00:10:18.156223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.065 [2024-04-27 00:10:18.156230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.065 [2024-04-27 00:10:18.159713] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.065 [2024-04-27 00:10:18.168535] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.065 [2024-04-27 00:10:18.169053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.169298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.169311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-04-27 00:10:18.169320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.065 [2024-04-27 00:10:18.169554] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.065 [2024-04-27 00:10:18.169772] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.065 [2024-04-27 00:10:18.169781] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.065 [2024-04-27 00:10:18.169792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.065 [2024-04-27 00:10:18.173285] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.065 [2024-04-27 00:10:18.182315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.065 [2024-04-27 00:10:18.183026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.183378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-04-27 00:10:18.183391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-04-27 00:10:18.183400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.065 [2024-04-27 00:10:18.183633] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.065 [2024-04-27 00:10:18.183858] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.065 [2024-04-27 00:10:18.183867] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.066 [2024-04-27 00:10:18.183874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.066 [2024-04-27 00:10:18.187350] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.066 [2024-04-27 00:10:18.196163] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.066 [2024-04-27 00:10:18.196816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.197216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.197229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-04-27 00:10:18.197239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.066 [2024-04-27 00:10:18.197473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.066 [2024-04-27 00:10:18.197691] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.066 [2024-04-27 00:10:18.197699] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.066 [2024-04-27 00:10:18.197706] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.066 [2024-04-27 00:10:18.201390] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.066 [2024-04-27 00:10:18.210022] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.066 [2024-04-27 00:10:18.210559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.210860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.210871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-04-27 00:10:18.210879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.066 [2024-04-27 00:10:18.211095] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.066 [2024-04-27 00:10:18.211309] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.066 [2024-04-27 00:10:18.211317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.066 [2024-04-27 00:10:18.211324] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.066 [2024-04-27 00:10:18.214803] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.066 [2024-04-27 00:10:18.223818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.066 [2024-04-27 00:10:18.224472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.224829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.224848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-04-27 00:10:18.224858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.066 [2024-04-27 00:10:18.225091] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.066 [2024-04-27 00:10:18.225309] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.066 [2024-04-27 00:10:18.225318] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.066 [2024-04-27 00:10:18.225325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.066 [2024-04-27 00:10:18.228803] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.066 [2024-04-27 00:10:18.237624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.066 [2024-04-27 00:10:18.238185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.238511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.238521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-04-27 00:10:18.238529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.066 [2024-04-27 00:10:18.238744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.066 [2024-04-27 00:10:18.238964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.066 [2024-04-27 00:10:18.238972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.066 [2024-04-27 00:10:18.238979] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.066 [2024-04-27 00:10:18.242452] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.066 [2024-04-27 00:10:18.251466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.066 [2024-04-27 00:10:18.252132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.252430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.252444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-04-27 00:10:18.252453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.066 [2024-04-27 00:10:18.252687] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.066 [2024-04-27 00:10:18.252913] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.066 [2024-04-27 00:10:18.252922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.066 [2024-04-27 00:10:18.252929] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.066 [2024-04-27 00:10:18.256408] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.066 [2024-04-27 00:10:18.265228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.066 [2024-04-27 00:10:18.265666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.265913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.265924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-04-27 00:10:18.265932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.066 [2024-04-27 00:10:18.266147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.066 [2024-04-27 00:10:18.266362] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.066 [2024-04-27 00:10:18.266369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.066 [2024-04-27 00:10:18.266376] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.066 [2024-04-27 00:10:18.269851] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.066 [2024-04-27 00:10:18.279072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.066 [2024-04-27 00:10:18.279525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.279850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-04-27 00:10:18.279861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-04-27 00:10:18.279868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.066 [2024-04-27 00:10:18.280084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.066 [2024-04-27 00:10:18.280298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.066 [2024-04-27 00:10:18.280306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.066 [2024-04-27 00:10:18.280313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.329 [2024-04-27 00:10:18.283786] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.329 [2024-04-27 00:10:18.292805] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.329 [2024-04-27 00:10:18.293477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.293827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.293846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-04-27 00:10:18.293856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.329 [2024-04-27 00:10:18.294090] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.329 [2024-04-27 00:10:18.294308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.329 [2024-04-27 00:10:18.294317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.329 [2024-04-27 00:10:18.294325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.329 [2024-04-27 00:10:18.297803] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.329 [2024-04-27 00:10:18.306614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.329 [2024-04-27 00:10:18.307198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.307400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.307410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-04-27 00:10:18.307418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.329 [2024-04-27 00:10:18.307634] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.329 [2024-04-27 00:10:18.307853] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.329 [2024-04-27 00:10:18.307861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.329 [2024-04-27 00:10:18.307868] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.329 [2024-04-27 00:10:18.311355] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.329 [2024-04-27 00:10:18.320375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.329 [2024-04-27 00:10:18.321069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.321420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.321433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-04-27 00:10:18.321442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.329 [2024-04-27 00:10:18.321676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.329 [2024-04-27 00:10:18.321900] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.329 [2024-04-27 00:10:18.321910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.329 [2024-04-27 00:10:18.321917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.329 [2024-04-27 00:10:18.325396] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.329 [2024-04-27 00:10:18.334212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.329 [2024-04-27 00:10:18.334769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.334979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.334989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-04-27 00:10:18.334996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.329 [2024-04-27 00:10:18.335212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.329 [2024-04-27 00:10:18.335426] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.329 [2024-04-27 00:10:18.335433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.329 [2024-04-27 00:10:18.335440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.329 [2024-04-27 00:10:18.338923] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.329 [2024-04-27 00:10:18.347942] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.329 [2024-04-27 00:10:18.348582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.348936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.348955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-04-27 00:10:18.348964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.329 [2024-04-27 00:10:18.349198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.329 [2024-04-27 00:10:18.349416] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.329 [2024-04-27 00:10:18.349425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.329 [2024-04-27 00:10:18.349432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.329 [2024-04-27 00:10:18.352916] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.329 [2024-04-27 00:10:18.361729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.329 [2024-04-27 00:10:18.362395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.362823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-04-27 00:10:18.362836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-04-27 00:10:18.362852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.329 [2024-04-27 00:10:18.363086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.329 [2024-04-27 00:10:18.363304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.329 [2024-04-27 00:10:18.363312] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.329 [2024-04-27 00:10:18.363320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 546737 Killed "${NVMF_APP[@]}" "$@" 00:25:48.329 00:10:18 -- host/bdevperf.sh@36 -- # tgt_init 00:25:48.329 [2024-04-27 00:10:18.366797] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.329 00:10:18 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:48.329 00:10:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:48.329 00:10:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:48.329 00:10:18 -- common/autotest_common.sh@10 -- # set +x 00:25:48.329 00:10:18 -- nvmf/common.sh@470 -- # nvmfpid=548354 00:25:48.329 00:10:18 -- nvmf/common.sh@471 -- # waitforlisten 548354 00:25:48.329 [2024-04-27 00:10:18.375613] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.330 00:10:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:48.330 00:10:18 -- common/autotest_common.sh@817 -- # '[' -z 548354 ']' 00:25:48.330 [2024-04-27 00:10:18.375949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 00:10:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.330 [2024-04-27 00:10:18.376221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.376232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.330 [2024-04-27 00:10:18.376239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.330 00:10:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:48.330 [2024-04-27 00:10:18.376455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.330 00:10:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:48.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.330 [2024-04-27 00:10:18.376670] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.330 [2024-04-27 00:10:18.376679] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.330 [2024-04-27 00:10:18.376686] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.330 00:10:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:48.330 00:10:18 -- common/autotest_common.sh@10 -- # set +x 00:25:48.330 [2024-04-27 00:10:18.380169] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.330 [2024-04-27 00:10:18.389387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.330 [2024-04-27 00:10:18.389922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.390293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.390306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.330 [2024-04-27 00:10:18.390316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.330 [2024-04-27 00:10:18.390550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.330 [2024-04-27 00:10:18.390768] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.330 [2024-04-27 00:10:18.390776] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.330 [2024-04-27 00:10:18.390783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.330 [2024-04-27 00:10:18.394272] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.330 [2024-04-27 00:10:18.403291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.330 [2024-04-27 00:10:18.403954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.404337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.404351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.330 [2024-04-27 00:10:18.404360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.330 [2024-04-27 00:10:18.404594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.330 [2024-04-27 00:10:18.404813] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.330 [2024-04-27 00:10:18.404821] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.330 [2024-04-27 00:10:18.404828] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.330 [2024-04-27 00:10:18.408313] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.330 [2024-04-27 00:10:18.417150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.330 [2024-04-27 00:10:18.417851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.418257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.418270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.330 [2024-04-27 00:10:18.418279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.330 [2024-04-27 00:10:18.418514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.330 [2024-04-27 00:10:18.418736] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.330 [2024-04-27 00:10:18.418746] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.330 [2024-04-27 00:10:18.418753] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.330 [2024-04-27 00:10:18.422240] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.330 [2024-04-27 00:10:18.423915] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:25:48.330 [2024-04-27 00:10:18.423961] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.330 [2024-04-27 00:10:18.431055] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.330 [2024-04-27 00:10:18.431633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.432004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.432016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.330 [2024-04-27 00:10:18.432024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.330 [2024-04-27 00:10:18.432240] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.330 [2024-04-27 00:10:18.432455] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.330 [2024-04-27 00:10:18.432463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.330 [2024-04-27 00:10:18.432470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.330 [2024-04-27 00:10:18.435955] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.330 [2024-04-27 00:10:18.444816] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.330 [2024-04-27 00:10:18.445285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.445533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.445543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.330 [2024-04-27 00:10:18.445551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.330 [2024-04-27 00:10:18.445766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.330 [2024-04-27 00:10:18.445986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.330 [2024-04-27 00:10:18.445994] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.330 [2024-04-27 00:10:18.446001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.330 [2024-04-27 00:10:18.449473] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.330 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.330 [2024-04-27 00:10:18.458692] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.330 [2024-04-27 00:10:18.459170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.459512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.459522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.330 [2024-04-27 00:10:18.459533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.330 [2024-04-27 00:10:18.459749] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.330 [2024-04-27 00:10:18.459969] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.330 [2024-04-27 00:10:18.459977] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.330 [2024-04-27 00:10:18.459984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.330 [2024-04-27 00:10:18.463455] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.330 [2024-04-27 00:10:18.472466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.330 [2024-04-27 00:10:18.473045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.473376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-04-27 00:10:18.473386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.330 [2024-04-27 00:10:18.473393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.330 [2024-04-27 00:10:18.473608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.330 [2024-04-27 00:10:18.473822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.330 [2024-04-27 00:10:18.473829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.330 [2024-04-27 00:10:18.473839] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.331 [2024-04-27 00:10:18.477315] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.331 [2024-04-27 00:10:18.486327] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.331 [2024-04-27 00:10:18.486881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.487182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.487191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.331 [2024-04-27 00:10:18.487199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.331 [2024-04-27 00:10:18.487414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.331 [2024-04-27 00:10:18.487627] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.331 [2024-04-27 00:10:18.487636] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.331 [2024-04-27 00:10:18.487643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.331 [2024-04-27 00:10:18.489350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:48.331 [2024-04-27 00:10:18.491118] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.331 [2024-04-27 00:10:18.500134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.331 [2024-04-27 00:10:18.500700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.501078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.501088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.331 [2024-04-27 00:10:18.501099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.331 [2024-04-27 00:10:18.501315] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.331 [2024-04-27 00:10:18.501529] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.331 [2024-04-27 00:10:18.501537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.331 [2024-04-27 00:10:18.501543] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.331 [2024-04-27 00:10:18.505018] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
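(Annotation, illustration only.) Interleaved with the reconnect errors, the freshly restarted target prints "Total cores available: 3". That count follows from the core mask on its command line: nvmf_tgt is launched with '-m 0xE' (the EAL parameter line likewise shows '-c 0xE'), and 0xE is binary 1110, which selects cores 1, 2 and 3. The small C sketch below expands that mask arithmetic; it is unrelated to the SPDK sources.

/* Illustration only: expanding a core mask such as -m 0xE into a core list. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xE;   /* value taken from the '-m 0xE' option in the log */
    int count = 0;

    printf("cores:");
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core)) {
            printf(" %d", core);
            count++;
        }
    }
    printf("\ntotal: %d\n", count);   /* prints "cores: 1 2 3" / "total: 3" */
    return 0;
}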
00:25:48.331 [2024-04-27 00:10:18.514046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.331 [2024-04-27 00:10:18.514600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.514849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.514860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.331 [2024-04-27 00:10:18.514868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.331 [2024-04-27 00:10:18.515083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.331 [2024-04-27 00:10:18.515298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.331 [2024-04-27 00:10:18.515306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.331 [2024-04-27 00:10:18.515312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.331 [2024-04-27 00:10:18.518785] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.331 [2024-04-27 00:10:18.527809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.331 [2024-04-27 00:10:18.528376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.528682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.528692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.331 [2024-04-27 00:10:18.528700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.331 [2024-04-27 00:10:18.528919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.331 [2024-04-27 00:10:18.529134] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.331 [2024-04-27 00:10:18.529142] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.331 [2024-04-27 00:10:18.529150] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.331 [2024-04-27 00:10:18.532619] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.331 [2024-04-27 00:10:18.541557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.331 [2024-04-27 00:10:18.542061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.542414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-04-27 00:10:18.542423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.331 [2024-04-27 00:10:18.542431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.331 [2024-04-27 00:10:18.542650] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.331 [2024-04-27 00:10:18.542868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.331 [2024-04-27 00:10:18.542876] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.331 [2024-04-27 00:10:18.542883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.331 [2024-04-27 00:10:18.546354] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.593 [2024-04-27 00:10:18.552387] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.593 [2024-04-27 00:10:18.552415] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.593 [2024-04-27 00:10:18.552422] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.593 [2024-04-27 00:10:18.552428] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.593 [2024-04-27 00:10:18.552434] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:48.593 [2024-04-27 00:10:18.552535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.593 [2024-04-27 00:10:18.552689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.593 [2024-04-27 00:10:18.552690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:48.593 [2024-04-27 00:10:18.555365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.593 [2024-04-27 00:10:18.555930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.593 [2024-04-27 00:10:18.556262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.593 [2024-04-27 00:10:18.556272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.593 [2024-04-27 00:10:18.556280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.593 [2024-04-27 00:10:18.556494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.593 [2024-04-27 00:10:18.556708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.593 [2024-04-27 00:10:18.556716] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.593 [2024-04-27 00:10:18.556723] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.593 [2024-04-27 00:10:18.560198] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.593 [2024-04-27 00:10:18.569204] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.593 [2024-04-27 00:10:18.569767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.593 [2024-04-27 00:10:18.569973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.593 [2024-04-27 00:10:18.569984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.593 [2024-04-27 00:10:18.569991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.593 [2024-04-27 00:10:18.570206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.593 [2024-04-27 00:10:18.570421] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.593 [2024-04-27 00:10:18.570429] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.593 [2024-04-27 00:10:18.570436] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.593 [2024-04-27 00:10:18.573909] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.594 [2024-04-27 00:10:18.582930] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.594 [2024-04-27 00:10:18.583453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.583721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.583735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.594 [2024-04-27 00:10:18.583746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.594 [2024-04-27 00:10:18.583994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.594 [2024-04-27 00:10:18.584213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.594 [2024-04-27 00:10:18.584221] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.594 [2024-04-27 00:10:18.584228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.594 [2024-04-27 00:10:18.587708] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.594 [2024-04-27 00:10:18.596720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.594 [2024-04-27 00:10:18.597435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.597792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.597805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.594 [2024-04-27 00:10:18.597815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.594 [2024-04-27 00:10:18.598060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.594 [2024-04-27 00:10:18.598278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.594 [2024-04-27 00:10:18.598287] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.594 [2024-04-27 00:10:18.598295] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.594 [2024-04-27 00:10:18.601773] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.594 [2024-04-27 00:10:18.610594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.594 [2024-04-27 00:10:18.611276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.611639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.611652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.594 [2024-04-27 00:10:18.611662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.594 [2024-04-27 00:10:18.611903] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.594 [2024-04-27 00:10:18.612122] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.594 [2024-04-27 00:10:18.612131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.594 [2024-04-27 00:10:18.612138] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.594 [2024-04-27 00:10:18.615614] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.594 [2024-04-27 00:10:18.624426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.594 [2024-04-27 00:10:18.625155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.625370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.625382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.594 [2024-04-27 00:10:18.625392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.594 [2024-04-27 00:10:18.625626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.594 [2024-04-27 00:10:18.625851] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.594 [2024-04-27 00:10:18.625861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.594 [2024-04-27 00:10:18.625868] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.594 [2024-04-27 00:10:18.629347] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.594 [2024-04-27 00:10:18.638164] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.594 [2024-04-27 00:10:18.638724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.638929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.638941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.594 [2024-04-27 00:10:18.638948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.594 [2024-04-27 00:10:18.639164] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.594 [2024-04-27 00:10:18.639379] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.594 [2024-04-27 00:10:18.639387] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.594 [2024-04-27 00:10:18.639393] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.594 [2024-04-27 00:10:18.642868] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.594 [2024-04-27 00:10:18.652079] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.594 [2024-04-27 00:10:18.652619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.652998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.653012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.594 [2024-04-27 00:10:18.653022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.594 [2024-04-27 00:10:18.653256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.594 [2024-04-27 00:10:18.653473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.594 [2024-04-27 00:10:18.653483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.594 [2024-04-27 00:10:18.653490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.594 [2024-04-27 00:10:18.656972] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.594 [2024-04-27 00:10:18.665983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.594 [2024-04-27 00:10:18.666648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.666891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.666905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.594 [2024-04-27 00:10:18.666914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.594 [2024-04-27 00:10:18.667149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.594 [2024-04-27 00:10:18.667367] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.594 [2024-04-27 00:10:18.667375] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.594 [2024-04-27 00:10:18.667382] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.594 [2024-04-27 00:10:18.670865] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.594 [2024-04-27 00:10:18.679877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.594 [2024-04-27 00:10:18.680450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.680809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.680821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.594 [2024-04-27 00:10:18.680831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.594 [2024-04-27 00:10:18.681072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.594 [2024-04-27 00:10:18.681291] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.594 [2024-04-27 00:10:18.681299] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.594 [2024-04-27 00:10:18.681306] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.594 [2024-04-27 00:10:18.684784] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.594 [2024-04-27 00:10:18.693592] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.594 [2024-04-27 00:10:18.694252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.694480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.594 [2024-04-27 00:10:18.694492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.594 [2024-04-27 00:10:18.694502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.594 [2024-04-27 00:10:18.694736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.594 [2024-04-27 00:10:18.694960] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.595 [2024-04-27 00:10:18.694969] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.595 [2024-04-27 00:10:18.694976] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.595 [2024-04-27 00:10:18.698453] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.595 [2024-04-27 00:10:18.707465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.595 [2024-04-27 00:10:18.708170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.708604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.708617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.595 [2024-04-27 00:10:18.708630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.595 [2024-04-27 00:10:18.708880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.595 [2024-04-27 00:10:18.709099] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.595 [2024-04-27 00:10:18.709107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.595 [2024-04-27 00:10:18.709114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.595 [2024-04-27 00:10:18.712591] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.595 [2024-04-27 00:10:18.721191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.595 [2024-04-27 00:10:18.721628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.722110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.722146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.595 [2024-04-27 00:10:18.722156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.595 [2024-04-27 00:10:18.722390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.595 [2024-04-27 00:10:18.722608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.595 [2024-04-27 00:10:18.722616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.595 [2024-04-27 00:10:18.722624] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.595 [2024-04-27 00:10:18.726106] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.595 [2024-04-27 00:10:18.734922] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.595 [2024-04-27 00:10:18.735485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.735847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.735860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.595 [2024-04-27 00:10:18.735869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.595 [2024-04-27 00:10:18.736103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.595 [2024-04-27 00:10:18.736322] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.595 [2024-04-27 00:10:18.736330] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.595 [2024-04-27 00:10:18.736337] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.595 [2024-04-27 00:10:18.739820] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.595 [2024-04-27 00:10:18.748836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.595 [2024-04-27 00:10:18.749494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.749935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.749949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.595 [2024-04-27 00:10:18.749959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.595 [2024-04-27 00:10:18.750197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.595 [2024-04-27 00:10:18.750415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.595 [2024-04-27 00:10:18.750423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.595 [2024-04-27 00:10:18.750430] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.595 [2024-04-27 00:10:18.753912] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.595 [2024-04-27 00:10:18.762718] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.595 [2024-04-27 00:10:18.763269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.763640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.763652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.595 [2024-04-27 00:10:18.763662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.595 [2024-04-27 00:10:18.763902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.595 [2024-04-27 00:10:18.764120] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.595 [2024-04-27 00:10:18.764128] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.595 [2024-04-27 00:10:18.764136] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.595 [2024-04-27 00:10:18.767611] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.595 [2024-04-27 00:10:18.776621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.595 [2024-04-27 00:10:18.777288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.777659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.777671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.595 [2024-04-27 00:10:18.777680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.595 [2024-04-27 00:10:18.777922] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.595 [2024-04-27 00:10:18.778140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.595 [2024-04-27 00:10:18.778149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.595 [2024-04-27 00:10:18.778156] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.595 [2024-04-27 00:10:18.781631] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.595 [2024-04-27 00:10:18.790435] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.595 [2024-04-27 00:10:18.790938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.791191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.791203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.595 [2024-04-27 00:10:18.791212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.595 [2024-04-27 00:10:18.791446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.595 [2024-04-27 00:10:18.791669] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.595 [2024-04-27 00:10:18.791677] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.595 [2024-04-27 00:10:18.791684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.595 [2024-04-27 00:10:18.795169] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.595 [2024-04-27 00:10:18.804177] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.595 [2024-04-27 00:10:18.804742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.805096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.595 [2024-04-27 00:10:18.805107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.595 [2024-04-27 00:10:18.805114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.595 [2024-04-27 00:10:18.805330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.595 [2024-04-27 00:10:18.805545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.595 [2024-04-27 00:10:18.805552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.595 [2024-04-27 00:10:18.805560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.595 [2024-04-27 00:10:18.809041] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.858 [2024-04-27 00:10:18.818053] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.858 [2024-04-27 00:10:18.818620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.818877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.818896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.858 [2024-04-27 00:10:18.818904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.858 [2024-04-27 00:10:18.819124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.858 [2024-04-27 00:10:18.819339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.858 [2024-04-27 00:10:18.819346] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.858 [2024-04-27 00:10:18.819353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.858 [2024-04-27 00:10:18.822828] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.858 [2024-04-27 00:10:18.831920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.858 [2024-04-27 00:10:18.832580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.832957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.832972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.858 [2024-04-27 00:10:18.832982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.858 [2024-04-27 00:10:18.833216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.858 [2024-04-27 00:10:18.833434] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.858 [2024-04-27 00:10:18.833447] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.858 [2024-04-27 00:10:18.833454] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.858 [2024-04-27 00:10:18.836940] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.858 [2024-04-27 00:10:18.845754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.858 [2024-04-27 00:10:18.846430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.846787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.846799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.858 [2024-04-27 00:10:18.846809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.858 [2024-04-27 00:10:18.847049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.858 [2024-04-27 00:10:18.847268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.858 [2024-04-27 00:10:18.847275] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.858 [2024-04-27 00:10:18.847283] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.858 [2024-04-27 00:10:18.850757] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.858 [2024-04-27 00:10:18.859562] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.858 [2024-04-27 00:10:18.860081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.860445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.860458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.858 [2024-04-27 00:10:18.860467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.858 [2024-04-27 00:10:18.860701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.858 [2024-04-27 00:10:18.860923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.858 [2024-04-27 00:10:18.860933] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.858 [2024-04-27 00:10:18.860941] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.858 [2024-04-27 00:10:18.864420] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.858 [2024-04-27 00:10:18.873433] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.858 [2024-04-27 00:10:18.874114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.874333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.874346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.858 [2024-04-27 00:10:18.874355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.858 [2024-04-27 00:10:18.874589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.858 [2024-04-27 00:10:18.874807] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.858 [2024-04-27 00:10:18.874815] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.858 [2024-04-27 00:10:18.874827] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.858 [2024-04-27 00:10:18.878309] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.858 [2024-04-27 00:10:18.887330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.858 [2024-04-27 00:10:18.887961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.888192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.888204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.858 [2024-04-27 00:10:18.888213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.858 [2024-04-27 00:10:18.888447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.858 [2024-04-27 00:10:18.888666] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.858 [2024-04-27 00:10:18.888674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.858 [2024-04-27 00:10:18.888682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.858 [2024-04-27 00:10:18.892163] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.858 [2024-04-27 00:10:18.901170] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.858 [2024-04-27 00:10:18.901687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.902077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.902091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.858 [2024-04-27 00:10:18.902101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.858 [2024-04-27 00:10:18.902334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.858 [2024-04-27 00:10:18.902552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.858 [2024-04-27 00:10:18.902561] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.858 [2024-04-27 00:10:18.902568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.858 [2024-04-27 00:10:18.906047] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.858 [2024-04-27 00:10:18.915068] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.858 [2024-04-27 00:10:18.915713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.916088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.858 [2024-04-27 00:10:18.916103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.858 [2024-04-27 00:10:18.916112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.858 [2024-04-27 00:10:18.916346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.858 [2024-04-27 00:10:18.916564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.858 [2024-04-27 00:10:18.916573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.858 [2024-04-27 00:10:18.916580] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.858 [2024-04-27 00:10:18.920066] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.858 [2024-04-27 00:10:18.928876] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.858 [2024-04-27 00:10:18.929321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.929672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.929682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.859 [2024-04-27 00:10:18.929689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.859 [2024-04-27 00:10:18.929909] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.859 [2024-04-27 00:10:18.930124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.859 [2024-04-27 00:10:18.930132] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.859 [2024-04-27 00:10:18.930139] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.859 [2024-04-27 00:10:18.933608] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.859 [2024-04-27 00:10:18.942629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.859 [2024-04-27 00:10:18.943195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.943567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.943580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.859 [2024-04-27 00:10:18.943589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.859 [2024-04-27 00:10:18.943823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.859 [2024-04-27 00:10:18.944048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.859 [2024-04-27 00:10:18.944058] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.859 [2024-04-27 00:10:18.944065] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.859 [2024-04-27 00:10:18.947540] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.859 [2024-04-27 00:10:18.956344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.859 [2024-04-27 00:10:18.956920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.957294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.957307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.859 [2024-04-27 00:10:18.957316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.859 [2024-04-27 00:10:18.957550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.859 [2024-04-27 00:10:18.957767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.859 [2024-04-27 00:10:18.957775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.859 [2024-04-27 00:10:18.957783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.859 [2024-04-27 00:10:18.961263] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.859 [2024-04-27 00:10:18.970073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.859 [2024-04-27 00:10:18.970480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.970723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.970734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.859 [2024-04-27 00:10:18.970742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.859 [2024-04-27 00:10:18.970968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.859 [2024-04-27 00:10:18.971185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.859 [2024-04-27 00:10:18.971192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.859 [2024-04-27 00:10:18.971199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.859 [2024-04-27 00:10:18.974672] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.859 [2024-04-27 00:10:18.983885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.859 [2024-04-27 00:10:18.984531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.984899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.984913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.859 [2024-04-27 00:10:18.984922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.859 [2024-04-27 00:10:18.985156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.859 [2024-04-27 00:10:18.985374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.859 [2024-04-27 00:10:18.985383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.859 [2024-04-27 00:10:18.985390] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.859 [2024-04-27 00:10:18.988871] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.859 [2024-04-27 00:10:18.997673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.859 [2024-04-27 00:10:18.998354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.998564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:18.998576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.859 [2024-04-27 00:10:18.998585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.859 [2024-04-27 00:10:18.998819] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.859 [2024-04-27 00:10:18.999044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.859 [2024-04-27 00:10:18.999053] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.859 [2024-04-27 00:10:18.999060] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.859 [2024-04-27 00:10:19.002535] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.859 [2024-04-27 00:10:19.011555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.859 [2024-04-27 00:10:19.012233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:19.012452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:19.012465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.859 [2024-04-27 00:10:19.012474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.859 [2024-04-27 00:10:19.012708] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.859 [2024-04-27 00:10:19.012931] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.859 [2024-04-27 00:10:19.012940] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.859 [2024-04-27 00:10:19.012947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.859 [2024-04-27 00:10:19.016425] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.859 [2024-04-27 00:10:19.025437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.859 [2024-04-27 00:10:19.025966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:19.026340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:19.026353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.859 [2024-04-27 00:10:19.026362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.859 [2024-04-27 00:10:19.026595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.859 [2024-04-27 00:10:19.026814] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.859 [2024-04-27 00:10:19.026823] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.859 [2024-04-27 00:10:19.026830] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.859 [2024-04-27 00:10:19.030314] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.859 [2024-04-27 00:10:19.039346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.859 [2024-04-27 00:10:19.040115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:19.040355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.859 [2024-04-27 00:10:19.040370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.859 [2024-04-27 00:10:19.040379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.859 [2024-04-27 00:10:19.040613] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.859 [2024-04-27 00:10:19.040832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.859 [2024-04-27 00:10:19.040848] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.859 [2024-04-27 00:10:19.040856] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.860 [2024-04-27 00:10:19.044334] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:48.860 [2024-04-27 00:10:19.053150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.860 [2024-04-27 00:10:19.053884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.860 [2024-04-27 00:10:19.054117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.860 [2024-04-27 00:10:19.054135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.860 [2024-04-27 00:10:19.054145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.860 [2024-04-27 00:10:19.054379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.860 [2024-04-27 00:10:19.054597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.860 [2024-04-27 00:10:19.054605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.860 [2024-04-27 00:10:19.054612] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.860 [2024-04-27 00:10:19.058098] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.860 [2024-04-27 00:10:19.066911] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.860 [2024-04-27 00:10:19.067590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.860 [2024-04-27 00:10:19.067962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.860 [2024-04-27 00:10:19.067977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:48.860 [2024-04-27 00:10:19.067987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:48.860 [2024-04-27 00:10:19.068220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:48.860 [2024-04-27 00:10:19.068438] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.860 [2024-04-27 00:10:19.068447] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.860 [2024-04-27 00:10:19.068454] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.860 [2024-04-27 00:10:19.071939] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.122 [2024-04-27 00:10:19.080745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.122 [2024-04-27 00:10:19.081370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.081702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.081712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.122 [2024-04-27 00:10:19.081720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.122 [2024-04-27 00:10:19.081940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.122 [2024-04-27 00:10:19.082154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.122 [2024-04-27 00:10:19.082163] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.122 [2024-04-27 00:10:19.082170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.122 [2024-04-27 00:10:19.085640] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.122 [2024-04-27 00:10:19.094653] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.122 [2024-04-27 00:10:19.095345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.095710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.095722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.122 [2024-04-27 00:10:19.095736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.122 [2024-04-27 00:10:19.095979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.122 [2024-04-27 00:10:19.096198] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.122 [2024-04-27 00:10:19.096207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.122 [2024-04-27 00:10:19.096215] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.122 [2024-04-27 00:10:19.099691] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.122 [2024-04-27 00:10:19.108504] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.122 [2024-04-27 00:10:19.109190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.109546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.109559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.122 [2024-04-27 00:10:19.109569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.122 [2024-04-27 00:10:19.109802] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.122 [2024-04-27 00:10:19.110038] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.122 [2024-04-27 00:10:19.110047] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.122 [2024-04-27 00:10:19.110054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.122 [2024-04-27 00:10:19.113532] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.122 [2024-04-27 00:10:19.122343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.122 [2024-04-27 00:10:19.123103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.123464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.123477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.122 [2024-04-27 00:10:19.123487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.122 [2024-04-27 00:10:19.123720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.122 [2024-04-27 00:10:19.123946] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.122 [2024-04-27 00:10:19.123955] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.122 [2024-04-27 00:10:19.123963] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.122 [2024-04-27 00:10:19.127440] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.122 [2024-04-27 00:10:19.136262] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.122 [2024-04-27 00:10:19.136830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.137019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.137029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.122 [2024-04-27 00:10:19.137037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.122 [2024-04-27 00:10:19.137257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.122 [2024-04-27 00:10:19.137473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.122 [2024-04-27 00:10:19.137481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.122 [2024-04-27 00:10:19.137488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.122 [2024-04-27 00:10:19.140969] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.122 [2024-04-27 00:10:19.150000] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.122 [2024-04-27 00:10:19.150601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.150656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.150665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.122 [2024-04-27 00:10:19.150673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.122 [2024-04-27 00:10:19.150893] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.122 [2024-04-27 00:10:19.151109] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.122 [2024-04-27 00:10:19.151118] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.122 [2024-04-27 00:10:19.151125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.122 [2024-04-27 00:10:19.154595] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.122 [2024-04-27 00:10:19.163815] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.122 [2024-04-27 00:10:19.164415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.164580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.164590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.122 [2024-04-27 00:10:19.164597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.122 [2024-04-27 00:10:19.164813] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.122 [2024-04-27 00:10:19.165033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.122 [2024-04-27 00:10:19.165041] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.122 [2024-04-27 00:10:19.165048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.122 [2024-04-27 00:10:19.168520] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.122 [2024-04-27 00:10:19.177532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.122 [2024-04-27 00:10:19.177973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.122 [2024-04-27 00:10:19.178189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.178202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.123 [2024-04-27 00:10:19.178211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.123 [2024-04-27 00:10:19.178445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.123 [2024-04-27 00:10:19.178668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.123 [2024-04-27 00:10:19.178677] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.123 [2024-04-27 00:10:19.178684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.123 [2024-04-27 00:10:19.182167] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.123 [2024-04-27 00:10:19.191383] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.123 [2024-04-27 00:10:19.191956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.192362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.192375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.123 [2024-04-27 00:10:19.192384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.123 [2024-04-27 00:10:19.192618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.123 [2024-04-27 00:10:19.192836] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.123 [2024-04-27 00:10:19.192854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.123 [2024-04-27 00:10:19.192861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.123 [2024-04-27 00:10:19.196343] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.123 00:10:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:49.123 00:10:19 -- common/autotest_common.sh@850 -- # return 0 00:25:49.123 00:10:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:49.123 00:10:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:49.123 00:10:19 -- common/autotest_common.sh@10 -- # set +x 00:25:49.123 [2024-04-27 00:10:19.205176] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.123 [2024-04-27 00:10:19.205828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.206092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.206105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.123 [2024-04-27 00:10:19.206114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.123 [2024-04-27 00:10:19.206348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.123 [2024-04-27 00:10:19.206567] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.123 [2024-04-27 00:10:19.206575] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.123 [2024-04-27 00:10:19.206582] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.123 [2024-04-27 00:10:19.210077] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.123 [2024-04-27 00:10:19.219094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.123 [2024-04-27 00:10:19.219799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.220184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.220197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.123 [2024-04-27 00:10:19.220206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.123 [2024-04-27 00:10:19.220448] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.123 [2024-04-27 00:10:19.220666] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.123 [2024-04-27 00:10:19.220675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.123 [2024-04-27 00:10:19.220682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.123 [2024-04-27 00:10:19.224165] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.123 [2024-04-27 00:10:19.232983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.123 [2024-04-27 00:10:19.233699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.234084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.234099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.123 [2024-04-27 00:10:19.234109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.123 [2024-04-27 00:10:19.234343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.123 [2024-04-27 00:10:19.234560] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.123 [2024-04-27 00:10:19.234570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.123 [2024-04-27 00:10:19.234577] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.123 00:10:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.123 00:10:19 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.123 [2024-04-27 00:10:19.238072] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.123 00:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.123 00:10:19 -- common/autotest_common.sh@10 -- # set +x 00:25:49.123 [2024-04-27 00:10:19.243733] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.123 [2024-04-27 00:10:19.246888] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.123 [2024-04-27 00:10:19.247333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.247664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.247677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.123 [2024-04-27 00:10:19.247684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.123 [2024-04-27 00:10:19.247908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.123 [2024-04-27 00:10:19.248123] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.123 [2024-04-27 00:10:19.248131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.123 [2024-04-27 00:10:19.248138] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.123 00:10:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.123 00:10:19 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:49.123 00:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.123 00:10:19 -- common/autotest_common.sh@10 -- # set +x 00:25:49.123 [2024-04-27 00:10:19.251614] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.123 [2024-04-27 00:10:19.260623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.123 [2024-04-27 00:10:19.261273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.261627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.261640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.123 [2024-04-27 00:10:19.261650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.123 [2024-04-27 00:10:19.261893] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.123 [2024-04-27 00:10:19.262111] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.123 [2024-04-27 00:10:19.262120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.123 [2024-04-27 00:10:19.262127] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.123 [2024-04-27 00:10:19.265603] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.123 [2024-04-27 00:10:19.274426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.123 [2024-04-27 00:10:19.275129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.275535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.123 [2024-04-27 00:10:19.275548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.123 [2024-04-27 00:10:19.275557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.123 [2024-04-27 00:10:19.275793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.123 [2024-04-27 00:10:19.276018] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.123 [2024-04-27 00:10:19.276027] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.123 [2024-04-27 00:10:19.276035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.123 Malloc0 00:25:49.123 00:10:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.123 00:10:19 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.123 00:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.124 00:10:19 -- common/autotest_common.sh@10 -- # set +x 00:25:49.124 [2024-04-27 00:10:19.279511] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.124 [2024-04-27 00:10:19.288320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.124 00:10:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.124 [2024-04-27 00:10:19.288768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.124 00:10:19 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.124 [2024-04-27 00:10:19.289161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.124 [2024-04-27 00:10:19.289172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.124 [2024-04-27 00:10:19.289180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.124 00:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.124 [2024-04-27 00:10:19.289397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.124 00:10:19 -- common/autotest_common.sh@10 -- # set +x 00:25:49.124 [2024-04-27 00:10:19.289613] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.124 [2024-04-27 00:10:19.289621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.124 [2024-04-27 00:10:19.289633] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:49.124 [2024-04-27 00:10:19.293110] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.124 00:10:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.124 00:10:19 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.124 00:10:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.124 00:10:19 -- common/autotest_common.sh@10 -- # set +x 00:25:49.124 [2024-04-27 00:10:19.302122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.124 [2024-04-27 00:10:19.302805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.124 [2024-04-27 00:10:19.303191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.124 [2024-04-27 00:10:19.303204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x965630 with addr=10.0.0.2, port=4420 00:25:49.124 [2024-04-27 00:10:19.303214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965630 is same with the state(5) to be set 00:25:49.124 [2024-04-27 00:10:19.303448] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x965630 (9): Bad file descriptor 00:25:49.124 [2024-04-27 00:10:19.303666] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.124 [2024-04-27 00:10:19.303674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.124 [2024-04-27 00:10:19.303681] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.124 [2024-04-27 00:10:19.307162] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.124 [2024-04-27 00:10:19.307963] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.124 00:10:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.124 00:10:19 -- host/bdevperf.sh@38 -- # wait 547112 00:25:49.124 [2024-04-27 00:10:19.315980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.384 [2024-04-27 00:10:19.487947] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
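[editor's note] Interleaved with the reconnect noise above, the host/bdevperf.sh trace rebuilds the target over RPC: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a TCP listener on 10.0.0.2:4420 -- at which point the pending resets finally succeed. The same bring-up as a standalone sketch; the rpc.py path and a running nvmf_tgt are assumptions, while the RPC names and flags are copied from the trace:

    RPC=scripts/rpc.py            # assumed SPDK checkout layout
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420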
00:25:59.378 00:25:59.378 Latency(us) 00:25:59.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.378 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.378 Verification LBA range: start 0x0 length 0x4000 00:25:59.378 Nvme1n1 : 15.01 8374.99 32.71 10020.97 0.00 6933.40 778.24 17257.81 00:25:59.379 =================================================================================================================== 00:25:59.379 Total : 8374.99 32.71 10020.97 0.00 6933.40 778.24 17257.81 00:25:59.379 00:10:27 -- host/bdevperf.sh@39 -- # sync 00:25:59.379 00:10:27 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.379 00:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.379 00:10:27 -- common/autotest_common.sh@10 -- # set +x 00:25:59.379 00:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.379 00:10:27 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:59.379 00:10:27 -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:59.379 00:10:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:59.379 00:10:27 -- nvmf/common.sh@117 -- # sync 00:25:59.379 00:10:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:59.379 00:10:27 -- nvmf/common.sh@120 -- # set +e 00:25:59.379 00:10:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:59.379 00:10:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:59.379 rmmod nvme_tcp 00:25:59.379 rmmod nvme_fabrics 00:25:59.379 rmmod nvme_keyring 00:25:59.379 00:10:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:59.379 00:10:28 -- nvmf/common.sh@124 -- # set -e 00:25:59.379 00:10:28 -- nvmf/common.sh@125 -- # return 0 00:25:59.379 00:10:28 -- nvmf/common.sh@478 -- # '[' -n 548354 ']' 00:25:59.379 00:10:28 -- nvmf/common.sh@479 -- # killprocess 548354 00:25:59.379 00:10:28 -- common/autotest_common.sh@936 -- # '[' -z 548354 ']' 00:25:59.379 00:10:28 -- common/autotest_common.sh@940 -- # kill -0 548354 00:25:59.379 00:10:28 -- common/autotest_common.sh@941 -- # uname 00:25:59.379 00:10:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:59.379 00:10:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 548354 00:25:59.379 00:10:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:59.379 00:10:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:59.379 00:10:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 548354' 00:25:59.379 killing process with pid 548354 00:25:59.379 00:10:28 -- common/autotest_common.sh@955 -- # kill 548354 00:25:59.379 00:10:28 -- common/autotest_common.sh@960 -- # wait 548354 00:25:59.379 00:10:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:59.379 00:10:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:59.379 00:10:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:59.379 00:10:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.379 00:10:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.379 00:10:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.379 00:10:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.379 00:10:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.318 00:10:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:00.318 00:26:00.318 real 0m27.931s 00:26:00.318 user 1m3.389s 00:26:00.318 sys 0m7.084s 00:26:00.318 00:10:30 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:26:00.318 00:10:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.318 ************************************ 00:26:00.318 END TEST nvmf_bdevperf 00:26:00.318 ************************************ 00:26:00.318 00:10:30 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:00.318 00:10:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:00.318 00:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:00.318 00:10:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.318 ************************************ 00:26:00.318 START TEST nvmf_target_disconnect 00:26:00.318 ************************************ 00:26:00.318 00:10:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:00.578 * Looking for test storage... 00:26:00.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:00.578 00:10:30 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.578 00:10:30 -- nvmf/common.sh@7 -- # uname -s 00:26:00.578 00:10:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.578 00:10:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.578 00:10:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.578 00:10:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.578 00:10:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.578 00:10:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.578 00:10:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.578 00:10:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.578 00:10:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.578 00:10:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.579 00:10:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:00.579 00:10:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:00.579 00:10:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.579 00:10:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.579 00:10:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.579 00:10:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.579 00:10:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.579 00:10:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.579 00:10:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.579 00:10:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.579 00:10:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.579 00:10:30 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.579 00:10:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.579 00:10:30 -- paths/export.sh@5 -- # export PATH 00:26:00.579 00:10:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.579 00:10:30 -- nvmf/common.sh@47 -- # : 0 00:26:00.579 00:10:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:00.579 00:10:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:00.579 00:10:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.579 00:10:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.579 00:10:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.579 00:10:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:00.579 00:10:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:00.579 00:10:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:00.579 00:10:30 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:00.579 00:10:30 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:00.579 00:10:30 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:00.579 00:10:30 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:00.579 00:10:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:00.579 00:10:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.579 00:10:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:00.579 00:10:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:00.579 00:10:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:00.579 00:10:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.579 00:10:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.579 00:10:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.579 00:10:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:00.579 00:10:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:00.579 00:10:30 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:26:00.579 00:10:30 -- common/autotest_common.sh@10 -- # set +x 00:26:07.164 00:10:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:07.164 00:10:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:07.164 00:10:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:07.164 00:10:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:07.164 00:10:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:07.164 00:10:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:07.164 00:10:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:07.164 00:10:37 -- nvmf/common.sh@295 -- # net_devs=() 00:26:07.164 00:10:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:07.164 00:10:37 -- nvmf/common.sh@296 -- # e810=() 00:26:07.164 00:10:37 -- nvmf/common.sh@296 -- # local -ga e810 00:26:07.164 00:10:37 -- nvmf/common.sh@297 -- # x722=() 00:26:07.164 00:10:37 -- nvmf/common.sh@297 -- # local -ga x722 00:26:07.164 00:10:37 -- nvmf/common.sh@298 -- # mlx=() 00:26:07.164 00:10:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:07.164 00:10:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.164 00:10:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:07.164 00:10:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:07.164 00:10:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:07.164 00:10:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.164 00:10:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:07.164 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:07.164 00:10:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.164 00:10:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:07.164 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:07.164 00:10:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.164 00:10:37 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:07.164 00:10:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.164 00:10:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.164 00:10:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:07.164 00:10:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.164 00:10:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:07.164 Found net devices under 0000:31:00.0: cvl_0_0 00:26:07.164 00:10:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.164 00:10:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.164 00:10:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.164 00:10:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:07.164 00:10:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.164 00:10:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:07.164 Found net devices under 0000:31:00.1: cvl_0_1 00:26:07.164 00:10:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.164 00:10:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:07.164 00:10:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:07.164 00:10:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:07.164 00:10:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:07.164 00:10:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.164 00:10:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.164 00:10:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.164 00:10:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:07.164 00:10:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.164 00:10:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.164 00:10:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:07.164 00:10:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.164 00:10:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.164 00:10:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:07.424 00:10:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:07.424 00:10:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.424 00:10:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.424 00:10:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.425 00:10:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.425 00:10:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:07.425 00:10:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.732 00:10:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.732 00:10:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.732 00:10:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:07.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:07.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:26:07.732 00:26:07.732 --- 10.0.0.2 ping statistics --- 00:26:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.732 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:26:07.732 00:10:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:07.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:26:07.732 00:26:07.732 --- 10.0.0.1 ping statistics --- 00:26:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.732 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:26:07.732 00:10:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.732 00:10:37 -- nvmf/common.sh@411 -- # return 0 00:26:07.732 00:10:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:07.732 00:10:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.732 00:10:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:07.732 00:10:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:07.732 00:10:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.732 00:10:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:07.732 00:10:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:07.732 00:10:37 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:07.732 00:10:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:07.732 00:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:07.732 00:10:37 -- common/autotest_common.sh@10 -- # set +x 00:26:08.014 ************************************ 00:26:08.014 START TEST nvmf_target_disconnect_tc1 00:26:08.014 ************************************ 00:26:08.014 00:10:37 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:26:08.014 00:10:37 -- host/target_disconnect.sh@32 -- # set +e 00:26:08.014 00:10:37 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.014 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.014 [2024-04-27 00:10:38.026104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.014 [2024-04-27 00:10:38.026522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.014 [2024-04-27 00:10:38.026536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21375f0 with addr=10.0.0.2, port=4420 00:26:08.014 [2024-04-27 00:10:38.026567] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:08.014 [2024-04-27 00:10:38.026588] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:08.014 [2024-04-27 00:10:38.026596] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:08.014 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:08.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:08.014 Initializing NVMe Controllers 00:26:08.014 00:10:38 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:08.014 00:10:38 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:08.014 00:10:38 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:26:08.014 00:10:38 -- common/autotest_common.sh@1139 -- # return 0 00:26:08.014 
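[editor's note] nvmf_target_disconnect_tc1 above is a negative test: the reconnect example is pointed at 10.0.0.2:4420 before any target is listening there, so spdk_nvme_probe() is expected to fail (the same ECONNREFUSED as before) and the script only treats an unexpected success as an error. A paraphrased sketch of that expected-failure pattern -- the reconnect command line is taken from the trace, but the exact check in target_disconnect.sh may differ:

    set +e
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    rc=$?
    set -e
    [ "$rc" -ne 0 ]    # the negative test passes only if the run failed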
00:10:38 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:08.014 00:10:38 -- host/target_disconnect.sh@41 -- # set -e 00:26:08.014 00:26:08.014 real 0m0.105s 00:26:08.014 user 0m0.046s 00:26:08.014 sys 0m0.058s 00:26:08.014 00:10:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:08.014 00:10:38 -- common/autotest_common.sh@10 -- # set +x 00:26:08.014 ************************************ 00:26:08.014 END TEST nvmf_target_disconnect_tc1 00:26:08.014 ************************************ 00:26:08.014 00:10:38 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:08.014 00:10:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:08.014 00:10:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:08.014 00:10:38 -- common/autotest_common.sh@10 -- # set +x 00:26:08.275 ************************************ 00:26:08.275 START TEST nvmf_target_disconnect_tc2 00:26:08.275 ************************************ 00:26:08.275 00:10:38 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:26:08.275 00:10:38 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:08.275 00:10:38 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:08.275 00:10:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:08.275 00:10:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:08.275 00:10:38 -- common/autotest_common.sh@10 -- # set +x 00:26:08.275 00:10:38 -- nvmf/common.sh@470 -- # nvmfpid=554567 00:26:08.275 00:10:38 -- nvmf/common.sh@471 -- # waitforlisten 554567 00:26:08.275 00:10:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:08.275 00:10:38 -- common/autotest_common.sh@817 -- # '[' -z 554567 ']' 00:26:08.275 00:10:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.275 00:10:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:08.275 00:10:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.275 00:10:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:08.275 00:10:38 -- common/autotest_common.sh@10 -- # set +x 00:26:08.275 [2024-04-27 00:10:38.305464] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:26:08.275 [2024-04-27 00:10:38.305538] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.275 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.275 [2024-04-27 00:10:38.390396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.275 [2024-04-27 00:10:38.482959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.275 [2024-04-27 00:10:38.483017] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.275 [2024-04-27 00:10:38.483025] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.275 [2024-04-27 00:10:38.483032] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:08.275 [2024-04-27 00:10:38.483038] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.275 [2024-04-27 00:10:38.483688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:08.275 [2024-04-27 00:10:38.483820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:08.275 [2024-04-27 00:10:38.483989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:08.275 [2024-04-27 00:10:38.484033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:09.215 00:10:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:09.215 00:10:39 -- common/autotest_common.sh@850 -- # return 0 00:26:09.215 00:10:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:09.215 00:10:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:09.215 00:10:39 -- common/autotest_common.sh@10 -- # set +x 00:26:09.215 00:10:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.215 00:10:39 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:09.215 00:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.215 00:10:39 -- common/autotest_common.sh@10 -- # set +x 00:26:09.215 Malloc0 00:26:09.215 00:10:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.215 00:10:39 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:09.215 00:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.215 00:10:39 -- common/autotest_common.sh@10 -- # set +x 00:26:09.215 [2024-04-27 00:10:39.166795] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.215 00:10:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.215 00:10:39 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:09.215 00:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.215 00:10:39 -- common/autotest_common.sh@10 -- # set +x 00:26:09.215 00:10:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.215 00:10:39 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:09.215 00:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.215 00:10:39 -- common/autotest_common.sh@10 -- # set +x 00:26:09.215 00:10:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.215 00:10:39 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.215 00:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.215 00:10:39 -- common/autotest_common.sh@10 -- # set +x 00:26:09.215 [2024-04-27 00:10:39.207106] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.215 00:10:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.216 00:10:39 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:09.216 00:10:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.216 00:10:39 -- common/autotest_common.sh@10 -- # set +x 00:26:09.216 00:10:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.216 00:10:39 -- host/target_disconnect.sh@50 -- # reconnectpid=554621 00:26:09.216 00:10:39 -- host/target_disconnect.sh@52 -- # sleep 2 00:26:09.216 00:10:39 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:09.216 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.133 00:10:41 -- host/target_disconnect.sh@53 -- # kill -9 554567 00:26:11.133 00:10:41 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Write completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 Read completed with error (sct=0, sc=8) 00:26:11.133 starting I/O failed 00:26:11.133 [2024-04-27 00:10:41.239320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:11.133 [2024-04-27 00:10:41.239749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.133 [2024-04-27 00:10:41.240225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.133 
[2024-04-27 00:10:41.240260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
[2024-04-27 00:10:41.240526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-27 00:10:41.241061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-27 00:10:41.241096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
[2024-04-27 00:10:41.241378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-27 00:10:41.241661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-27 00:10:41.241672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
[... the same group of messages, connect() failed, errno = 111 / sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.", repeats for every further reconnect attempt up to 00:10:41.328, for as long as the target process stays killed ...]
00:26:11.138 [2024-04-27 00:10:41.328978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.138 [2024-04-27 00:10:41.329364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.138 [2024-04-27 00:10:41.329390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.138 qpair failed and we were unable to recover it. 00:26:11.138 [2024-04-27 00:10:41.329718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.138 [2024-04-27 00:10:41.330080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.138 [2024-04-27 00:10:41.330108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.138 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.330466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.330817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.330860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.331224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.331584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.331611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.331980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.332300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.332327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.332686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.333026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.333053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.333424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.333808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.333834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 
00:26:11.139 [2024-04-27 00:10:41.334238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.334623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.334649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.334983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.335355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.335381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.335756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.335981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.336011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.336273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.336611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.336637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.337000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.337389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.337415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.337781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.338122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.338149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.338529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.338889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.338917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 
00:26:11.139 [2024-04-27 00:10:41.339289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.339656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.339682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.339937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.340338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.340371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.340627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.340875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.340905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.341241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.341576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.341602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.341968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.342310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.342335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.342713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.343133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.343161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.343548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.343933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.343961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 
00:26:11.139 [2024-04-27 00:10:41.344352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.344713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.344739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.345099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.345462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.345489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.345868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.346254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.346280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.346636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.346984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.347011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.347383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.347735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.347767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.348094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.348474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.348501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 00:26:11.139 [2024-04-27 00:10:41.348764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.349109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.139 [2024-04-27 00:10:41.349136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.139 qpair failed and we were unable to recover it. 
00:26:11.139 [2024-04-27 00:10:41.349402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.349764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.349791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.350186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.350522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.350549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.350896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.351286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.351312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.351686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.352040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.352066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.352441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.352772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.352798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.353154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.353522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.353548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.353924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.354270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.354295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 
00:26:11.408 [2024-04-27 00:10:41.354662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.355040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.355073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.355339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.355700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.355727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.356103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.356354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.356384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.356760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.356998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.357027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.408 qpair failed and we were unable to recover it. 00:26:11.408 [2024-04-27 00:10:41.357382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.408 [2024-04-27 00:10:41.357729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.357754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.358139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.358505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.358531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.358908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.359280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.359306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 
00:26:11.409 [2024-04-27 00:10:41.359684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.360039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.360067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.360382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.360766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.360792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.361192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.361488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.361514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.361889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.362256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.362288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.362651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.363015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.363043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.363295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.363522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.363548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.363931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.364294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.364320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 
00:26:11.409 [2024-04-27 00:10:41.364659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.364973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.365000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.365346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.365703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.365729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.366140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.366482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.366508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.366884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.367248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.367274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.367648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.367993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.368021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.368392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.368775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.368801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.369225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.369605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.369631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 
00:26:11.409 [2024-04-27 00:10:41.369971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.370378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.370404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.370755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.371094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.371121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.371552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.371923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.371951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.372312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.372676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.372702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.409 qpair failed and we were unable to recover it. 00:26:11.409 [2024-04-27 00:10:41.373086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.409 [2024-04-27 00:10:41.373426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.373453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.373832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.374252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.374279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.374629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.374860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.374888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 
00:26:11.410 [2024-04-27 00:10:41.375248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.375582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.375609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.375956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.376353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.376379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.376763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.377070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.377097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.377350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.377748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.377774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.378131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.378519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.378545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.378891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.379254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.379281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.379618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.379983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.380012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 
00:26:11.410 [2024-04-27 00:10:41.380348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.380692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.380718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.381138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.381381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.381417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.381787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.382158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.382185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.382574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.382934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.382961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.383333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.383678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.383704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.384061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.384410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.384436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.384678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.385043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.385071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 
00:26:11.410 [2024-04-27 00:10:41.385434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.385799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.385825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.386183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.386514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.386540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.386825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.387089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.387118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.387468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.387864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.387892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.410 qpair failed and we were unable to recover it. 00:26:11.410 [2024-04-27 00:10:41.388225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.410 [2024-04-27 00:10:41.388556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.388582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.388983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.389228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.389253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.389635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.389981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.390009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 
00:26:11.411 [2024-04-27 00:10:41.390408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.390799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.390825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.391195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.391559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.391586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.391937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.392301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.392327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.392722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.393061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.393088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.393469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.393887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.393915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.394159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.394525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.394551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.394896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.395283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.395309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 
00:26:11.411 [2024-04-27 00:10:41.395678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.396043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.396070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.396429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.396669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.396697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.397052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.397408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.397434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.397789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.398122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.398149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.398546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.398913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.398962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.399355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.399720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.399746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.400095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.400455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.400481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 
00:26:11.411 [2024-04-27 00:10:41.400862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.401134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.401163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.401516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.401866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.401893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.402320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.402644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.402670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.403007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.403381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.403407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.411 qpair failed and we were unable to recover it. 00:26:11.411 [2024-04-27 00:10:41.403784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.411 [2024-04-27 00:10:41.404142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.404169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.404404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.404757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.404783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.405153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.405521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.405547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 
00:26:11.412 [2024-04-27 00:10:41.405913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.406217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.406245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.406627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.406991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.407019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.407396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.407743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.407769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.408145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.408506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.408533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.408862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.409113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.409139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.409519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.409897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.409924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.410278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.410668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.410694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 
00:26:11.412 [2024-04-27 00:10:41.411056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.411417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.411444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.411796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.412264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.412291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.412655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.413005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.413034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.413393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.413731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.413756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.414151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.414502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.414528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.414901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.415283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.415309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.415695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.416028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.416055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 
00:26:11.412 [2024-04-27 00:10:41.416285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.416635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.416661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.417095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.417453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.417480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.417859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.418254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.418280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.418662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.419055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.419082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.412 qpair failed and we were unable to recover it. 00:26:11.412 [2024-04-27 00:10:41.419464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.412 [2024-04-27 00:10:41.419788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.419813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.420212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.420566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.420592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.420854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.421117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.421145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 
00:26:11.413 [2024-04-27 00:10:41.421499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.421870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.421898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.422266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.422644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.422669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.423065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.423449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.423474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.423826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.424072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.424099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.424466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.424877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.424904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.425313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.425675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.425701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.425961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.426210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.426238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 
00:26:11.413 [2024-04-27 00:10:41.426609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.426955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.426983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.427344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.427704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.427730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.428106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.428471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.428497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.428824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.429224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.429251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.429634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.430011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.430038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.430409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.430757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.430783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.431113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.431450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.431477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 
00:26:11.413 [2024-04-27 00:10:41.431830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.432220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.432246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.432633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.432981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.433009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.433396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.433751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.433777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.434124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.434486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.434512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.413 [2024-04-27 00:10:41.434888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.435261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.413 [2024-04-27 00:10:41.435287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.413 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.435654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.435900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.435928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.436312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.436677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.436703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 
00:26:11.414 [2024-04-27 00:10:41.436977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.437350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.437377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.437738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.438095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.438122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.438386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.438549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.438578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.439025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.439388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.439414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.439785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.440113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.440141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.440500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.440854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.440882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.441159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.441492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.441518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 
00:26:11.414 [2024-04-27 00:10:41.441897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.442287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.442313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.442750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.443123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.443150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.443530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.443923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.443950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.444195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.444553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.444580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.444931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.445295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.445322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.445590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.445879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.445906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 00:26:11.414 [2024-04-27 00:10:41.446275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.446656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.414 [2024-04-27 00:10:41.446682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.414 qpair failed and we were unable to recover it. 
00:26:11.414 [2024-04-27 00:10:41.446925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.447192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.447222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.447596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.448054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.448082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.448455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.448818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.448851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.449200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.449566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.449592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.449982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.450218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.450246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.450614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.450993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.451021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.451390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.451776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.451802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 
00:26:11.415 [2024-04-27 00:10:41.452171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.452554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.452580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.452956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.453329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.453355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.453515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.453899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.453926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.454158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.454505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.454531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.454876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.455197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.455223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.455481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.455848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.455876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.456261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.456612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.456638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 
00:26:11.415 [2024-04-27 00:10:41.457022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.457357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.457384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.457597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.457943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.457975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.458363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.458711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.458737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.459002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.459373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.459399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.459760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.460084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.460112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.460490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.460834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.460870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 00:26:11.415 [2024-04-27 00:10:41.461152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.461485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.461511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.415 qpair failed and we were unable to recover it. 
00:26:11.415 [2024-04-27 00:10:41.461884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.415 [2024-04-27 00:10:41.462226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.462252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.462603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.462992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.463019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.463273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.463637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.463662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.464008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.464397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.464423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.464759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.465116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.465150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.465525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.465821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.465869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.466269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.466605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.466630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 
00:26:11.416 [2024-04-27 00:10:41.467006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.467386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.467412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.467717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.468086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.468114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.468489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.468877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.468904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.469267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.469629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.469655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.470012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.470266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.470291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.470692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.471043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.471070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.471443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.471806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.471832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 
00:26:11.416 [2024-04-27 00:10:41.472087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.472489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.472521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.472947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.473320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.473346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.473726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.474133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.474160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.474536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.474781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.474807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.475077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.475407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.475433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.475813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.476143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.476171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 00:26:11.416 [2024-04-27 00:10:41.476514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.476878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.476904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.416 qpair failed and we were unable to recover it. 
00:26:11.416 [2024-04-27 00:10:41.477280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.416 [2024-04-27 00:10:41.477598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.477625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.478001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.478399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.478426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.478799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.479178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.479205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.479580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.479881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.479915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.480292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.480539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.480568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.480947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.481312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.481340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.481686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.482044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.482072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 
00:26:11.417 [2024-04-27 00:10:41.482444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.482826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.482861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.483213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.483575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.483601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.483978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.484372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.484398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.484765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.485032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.485059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.485415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.485778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.485804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.486184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.486519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.486546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.486928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.487276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.487302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 
00:26:11.417 [2024-04-27 00:10:41.487656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.488047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.488075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.488455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.488869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.488897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.489289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.489679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.489706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.490082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.490445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.490472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.490748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.491115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.491142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.491548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.491811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.491846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.492231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.492571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.492596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 
00:26:11.417 [2024-04-27 00:10:41.492975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.493365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.493391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.417 [2024-04-27 00:10:41.493671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.493966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.417 [2024-04-27 00:10:41.493993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.417 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.494256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.494615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.494640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.495012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.495401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.495427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.495846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.496214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.496240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.496617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.496952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.496979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.497254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.497649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.497676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 
00:26:11.418 [2024-04-27 00:10:41.498034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.498296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.498323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.498660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.499017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.499044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.499434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.499891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.499918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.500292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.500655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.500681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.500937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.501310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.501335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.501743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.501956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.501982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.502348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.502773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.502799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 
00:26:11.418 [2024-04-27 00:10:41.503160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.503453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.503478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.503851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.504134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.504160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.504510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.504864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.504892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.505248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.505597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.505623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.506005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.506357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.506383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.506781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.507142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.507168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.507554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.507789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.507814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 
00:26:11.418 [2024-04-27 00:10:41.508085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.508441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.508468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.508687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.509008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.509036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.509297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.509609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.509636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.418 [2024-04-27 00:10:41.509986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.510377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.418 [2024-04-27 00:10:41.510403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.418 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.510806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.511227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.511253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.511457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.511780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.511806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.512169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.512519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.512545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 
00:26:11.419 [2024-04-27 00:10:41.512892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.513253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.513280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.513542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.513905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.513933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.514301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.514548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.514575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.514926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.515263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.515291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.515658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.515978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.516005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.516287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.516488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.516514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.516902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.517295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.517321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 
00:26:11.419 [2024-04-27 00:10:41.517548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.517817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.517866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.518124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.518490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.518516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.518862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.519187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.519212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.519591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.519984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.520012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.520407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.520770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.520797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.521203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.521557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.521583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.521981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.522383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.522409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 
00:26:11.419 [2024-04-27 00:10:41.522793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.523152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.523181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.523552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.523787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.523812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.524089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.524375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.524401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.524789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.525201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.525229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.525514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.525880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.525907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.419 qpair failed and we were unable to recover it. 00:26:11.419 [2024-04-27 00:10:41.526272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.526493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.419 [2024-04-27 00:10:41.526521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.526785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.527122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.527150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 
00:26:11.420 [2024-04-27 00:10:41.527513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.527920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.527947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.528282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.528644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.528670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.529044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.529427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.529453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.529814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.529928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.529957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.530340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.530724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.530750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.530974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.531359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.531385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.531741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.531986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.532013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 
00:26:11.420 [2024-04-27 00:10:41.532301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.532668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.532694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.532940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.533324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.533351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.533711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.533978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.534005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.534416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.534693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.534726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.535090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.535459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.535486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.535727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.536082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.536110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.536439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.536805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.536833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 
00:26:11.420 [2024-04-27 00:10:41.537198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.537447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.537474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.537820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.538218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.538245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.538500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.538826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.538867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.539291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.539642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.539669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.420 qpair failed and we were unable to recover it. 00:26:11.420 [2024-04-27 00:10:41.540029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.420 [2024-04-27 00:10:41.540280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.540309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.540681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.540996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.541024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.541253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.541588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.541614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 
00:26:11.421 [2024-04-27 00:10:41.541996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.542365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.542391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.542747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.543098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.543124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.543489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.543875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.543904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.544164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.544532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.544559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.544882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.545279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.545305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.545672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.546043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.546069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.546452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.546691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.546718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 
00:26:11.421 [2024-04-27 00:10:41.546999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.547279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.547306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.547699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.548043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.548070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.548511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.548855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.548882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.549283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.549646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.549672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.550120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.550474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.550500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.550891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.551289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.551316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.551664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.552056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.552083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 
00:26:11.421 [2024-04-27 00:10:41.552461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.552811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.552845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.553086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.553325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.553354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.553753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.554106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.554135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.554541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.554888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.554915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.555202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.555566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.555592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.555975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.556316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.556343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.556606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.556814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.556845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 
00:26:11.421 [2024-04-27 00:10:41.557246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.557603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.557629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.558043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.558404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.558430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.558811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.559164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.559191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.559560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.559910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.559937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.560274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.560537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.560565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.421 [2024-04-27 00:10:41.560923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.561288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.421 [2024-04-27 00:10:41.561313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.421 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.561695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.562044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.562070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 
00:26:11.422 [2024-04-27 00:10:41.562438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.562785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.562812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.563089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.563476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.563503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.563895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.564250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.564276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.564555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.564909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.564936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.565211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.565578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.565603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.565813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.566220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.566249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.566578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.566932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.566959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 
00:26:11.422 [2024-04-27 00:10:41.567327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.567696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.567722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.568098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.568348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.568374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.568704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.569049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.569076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.569431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.569794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.569820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.570213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.570496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.570522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.570793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.570959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.570990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.571364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.571688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.571714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 
00:26:11.422 [2024-04-27 00:10:41.572098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.572444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.572471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.572725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.573089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.573123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.573457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.573794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.573821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.574203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.574554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.574581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.574963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.575329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.575356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.575711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.576049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.576076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.576465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.576825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.576862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 
00:26:11.422 [2024-04-27 00:10:41.577268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.577516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.577546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.577926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.578288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.578315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.578694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.579092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.579119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.579368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.579731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.579758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.580157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.580499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.580531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.580924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.581316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.581342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.581720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.582084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.582112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 
00:26:11.422 [2024-04-27 00:10:41.582474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.582816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.582851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.583234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.583571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.583597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.583966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.584362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.584388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.584733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.584990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.585019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.422 qpair failed and we were unable to recover it. 00:26:11.422 [2024-04-27 00:10:41.585375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.422 [2024-04-27 00:10:41.585623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.585652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.586040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.586402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.586428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.586792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.587154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.587182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 
00:26:11.423 [2024-04-27 00:10:41.587614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.587954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.587987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.588178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.588600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.588625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.588978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.589371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.589398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.589782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.590144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.590172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.590550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.590918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.590945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.591287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.591642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.591668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.592020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.592373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.592400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 
00:26:11.423 [2024-04-27 00:10:41.592783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.593038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.593065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.593453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.593815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.593848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.594225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.594565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.594591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.594953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.595342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.595368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.595773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.596140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.596168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.596528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.596889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.596916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.597239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.597566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.597592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 
00:26:11.423 [2024-04-27 00:10:41.597985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.598377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.598403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.598782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.599040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.599071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.599450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.599821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.599868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.600239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.600579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.600605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.600987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.601372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.601398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.601786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.602120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.602148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.602520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.602908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.602936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 
00:26:11.423 [2024-04-27 00:10:41.603323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.603682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.603709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.604068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.604434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.604461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.604658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.604942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.604969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.605333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.605716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.605742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.606103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.606466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.606493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.606877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.607293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.607319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.607666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.607945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.607973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 
00:26:11.423 [2024-04-27 00:10:41.608329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.608563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.608591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.608981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.609331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.609358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.609708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.610046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.610074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.423 qpair failed and we were unable to recover it. 00:26:11.423 [2024-04-27 00:10:41.610447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.423 [2024-04-27 00:10:41.610828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.610863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.611238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.611604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.611630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.611995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.612386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.612413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.612775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.613122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.613150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 
00:26:11.424 [2024-04-27 00:10:41.613485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.613893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.613920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.614260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.614615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.614641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.614999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.615372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.615398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.615846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.616223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.616250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.616608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.617000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.617027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.617369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.617734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.617761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.618114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.618481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.618507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 
00:26:11.424 [2024-04-27 00:10:41.618901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.619263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.424 [2024-04-27 00:10:41.619289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.424 qpair failed and we were unable to recover it. 00:26:11.424 [2024-04-27 00:10:41.619672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.693 [2024-04-27 00:10:41.620125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.693 [2024-04-27 00:10:41.620155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.693 qpair failed and we were unable to recover it. 00:26:11.693 [2024-04-27 00:10:41.620413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.693 [2024-04-27 00:10:41.620766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.693 [2024-04-27 00:10:41.620792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.693 qpair failed and we were unable to recover it. 00:26:11.693 [2024-04-27 00:10:41.621169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.693 [2024-04-27 00:10:41.621549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.693 [2024-04-27 00:10:41.621575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.693 qpair failed and we were unable to recover it. 00:26:11.693 [2024-04-27 00:10:41.621867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.622220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.622246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.622604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.622989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.623017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.623396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.623743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.623769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 
00:26:11.694 [2024-04-27 00:10:41.624145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.624464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.624489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.624875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.625256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.625282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.625534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.625895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.625924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.626265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.626647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.626673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.627034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.627410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.627436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.627816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.628156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.628185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.628556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.629016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.629044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 
00:26:11.694 [2024-04-27 00:10:41.629416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.629787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.629813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.630213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.630567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.630593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.630833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.631102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.631128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.631400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.631802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.631828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.632212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.632440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.632468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.632719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.633038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.633065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.633416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.633830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.633865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 
00:26:11.694 [2024-04-27 00:10:41.634238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.634475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.634501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.634884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.635241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.635267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.635669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.636023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.636051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.636377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.636765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.636792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.637130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.637490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.637517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.637874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.638244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.638271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.638619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.638985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.639013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 
00:26:11.694 [2024-04-27 00:10:41.639382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.639768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.639794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.640173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.640506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.694 [2024-04-27 00:10:41.640532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.694 qpair failed and we were unable to recover it. 00:26:11.694 [2024-04-27 00:10:41.640893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.641146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.641175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.641575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.641857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.641887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.642247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.642648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.642674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.643058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.643425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.643451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.643801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.644137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.644164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 
00:26:11.695 [2024-04-27 00:10:41.644518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.644755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.644780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.645130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.645495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.645521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.645894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.646268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.646295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.646664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.646923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.646952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.647339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.647714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.647741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.648116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.648474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.648500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.648891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.649126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.649152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 
00:26:11.695 [2024-04-27 00:10:41.649478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.649710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.649738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.650157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.650524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.650551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.650937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.651311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.651338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.651719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.651955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.651983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.652394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.652737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.652763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.653121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.653356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.653385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.653774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.654116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.654143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 
00:26:11.695 [2024-04-27 00:10:41.654409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.654777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.654804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.655197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.655560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.655587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.655950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.656205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.656232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.656608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.656948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.656976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.695 qpair failed and we were unable to recover it. 00:26:11.695 [2024-04-27 00:10:41.657351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.657601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.695 [2024-04-27 00:10:41.657630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.657893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.658271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.658297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.658671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.659080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.659107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 
00:26:11.696 [2024-04-27 00:10:41.659504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.659870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.659897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.660303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.660648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.660674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.661057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.661406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.661433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.661812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.662156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.662184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.662556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.662924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.662952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.663347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.663739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.663764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.664205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.664434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.664462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 
00:26:11.696 [2024-04-27 00:10:41.664853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.665149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.665176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.665549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.665798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.665827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.666126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.666521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.666548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.666929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.667300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.667327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.667685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.668086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.668115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.668483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.668874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.668901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.669272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.669613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.669640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 
00:26:11.696 [2024-04-27 00:10:41.669993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.670357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.670384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.670780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.671158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.671186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.671551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.671850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.671878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.672248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.672591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.672617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.672891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.673193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.673218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.673601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.673855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.673885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.674249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.674616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.674642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 
00:26:11.696 [2024-04-27 00:10:41.675022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.675378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.675404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.675783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.676126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.676155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.696 qpair failed and we were unable to recover it. 00:26:11.696 [2024-04-27 00:10:41.676498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.696 [2024-04-27 00:10:41.676867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.676895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.677258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.677617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.677644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.678082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.678426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.678452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.678692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.679018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.679046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.679429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.679772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.679798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 
00:26:11.697 [2024-04-27 00:10:41.680189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.680584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.680611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.680969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.681359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.681385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.681735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.682129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.682156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.682520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.682891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.682918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.683274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.683641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.683667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.684016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.684381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.684412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.684845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.685207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.685233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 
00:26:11.697 [2024-04-27 00:10:41.685628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.686005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.686033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.686416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.686773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.686800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.687134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.687519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.687545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.687904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.688140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.688167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.688578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.688829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.688866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.689247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.689578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.689604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.689973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.690357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.690383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 
00:26:11.697 [2024-04-27 00:10:41.690779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.691143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.691171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.691551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.691793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.691824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.691964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.692378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.692405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.692656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.693043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.693071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.697 [2024-04-27 00:10:41.693451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.693778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.697 [2024-04-27 00:10:41.693804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.697 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.694226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.694592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.694618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.695000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.695349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.695376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 
00:26:11.698 [2024-04-27 00:10:41.695737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.696076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.696104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.696444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.696828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.696863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.697236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.697572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.697598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.697961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.698367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.698393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.698749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.699125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.699159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.699531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.699924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.699951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.700330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.700659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.700685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 
00:26:11.698 [2024-04-27 00:10:41.701049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.701412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.701439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.701812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.702228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.702255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.702635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.703005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.703032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.703399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.703717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.703743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.704000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.704368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.704394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.704779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.705152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.705180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.705560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.705926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.705954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 
00:26:11.698 [2024-04-27 00:10:41.706232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.706604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.706642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.706991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.707387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.707413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.707790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.708154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.708181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.708535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.708887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.708914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.709278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.709625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.709651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.710057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.710429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.710455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.710817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.711207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.711236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 
00:26:11.698 [2024-04-27 00:10:41.711589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.711923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.711951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.712343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.712733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.712761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.698 qpair failed and we were unable to recover it. 00:26:11.698 [2024-04-27 00:10:41.713107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.713473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.698 [2024-04-27 00:10:41.713499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.713856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.714185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.714213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.714600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.714875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.714905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.715301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.715670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.715697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.716056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.716429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.716456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 
00:26:11.699 [2024-04-27 00:10:41.716891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.717146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.717176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.717558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.717917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.717944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.718386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.718627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.718656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.719039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.719396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.719423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.719787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.720182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.720210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.720615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.720989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.721016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.721413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.721755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.721781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 
00:26:11.699 [2024-04-27 00:10:41.722074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.722446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.722472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.722832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.723229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.723256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.723633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.723999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.724028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.724279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.724604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.724631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.724993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.725365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.725392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.725657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.726034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.726061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.726420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.726812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.726852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 
00:26:11.699 [2024-04-27 00:10:41.727188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.727515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.727541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.727904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.728296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.728322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.728723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.729094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.729121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.729484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.729858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.699 [2024-04-27 00:10:41.729886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.699 qpair failed and we were unable to recover it. 00:26:11.699 [2024-04-27 00:10:41.730124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.730525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.730551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.730920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.731292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.731318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.731671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.732040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.732068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 
00:26:11.700 [2024-04-27 00:10:41.732427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.732815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.732849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.733236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.733578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.733604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.734021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.734276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.734302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.734638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.735012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.735040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.735296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.735623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.735650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.736016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.736414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.736440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.736791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.737202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.737230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 
00:26:11.700 [2024-04-27 00:10:41.737492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.737821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.737855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.738161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.738503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.738529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.738797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.739180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.739208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.739569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.739939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.739967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.740364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.740740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.740766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.741176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.741542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.741569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.741977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.742353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.742380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 
00:26:11.700 [2024-04-27 00:10:41.742674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.742918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.742946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.743220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.743555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.743581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.743865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.744223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.744250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.744625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.744876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.744903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.745305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.745653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.745679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.746109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.746476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.746502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.746876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.747163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.747190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 
00:26:11.700 [2024-04-27 00:10:41.747446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.747825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.747862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.748270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.748632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.748658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.749028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.749370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.700 [2024-04-27 00:10:41.749397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.700 qpair failed and we were unable to recover it. 00:26:11.700 [2024-04-27 00:10:41.749769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.750139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.750166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.750400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.750778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.750804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.751176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.751559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.751585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.751875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.752251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.752277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 
00:26:11.701 [2024-04-27 00:10:41.752530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.752893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.752922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.753133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.753514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.753541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.753933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.754308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.754334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.754585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.754733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.754765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.754987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.755323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.755350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.755741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.756090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.756119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.756380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.756672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.756699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 
00:26:11.701 [2024-04-27 00:10:41.757091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.757462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.757488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.757942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.758082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.758107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.758489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.758864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.758891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.759231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.759506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.759532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.759921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.760282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.760308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.760680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.761039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.761067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.761445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.761823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.761859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 
00:26:11.701 [2024-04-27 00:10:41.762220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.762467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.762493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.762873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.763253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.763279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.763678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.764060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.764087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.764357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.764703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.764730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.765139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.765414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.765439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.765821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.766126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.766154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.766518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.766747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.766773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 
00:26:11.701 [2024-04-27 00:10:41.767135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.767511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.767537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.701 qpair failed and we were unable to recover it. 00:26:11.701 [2024-04-27 00:10:41.767797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.768041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.701 [2024-04-27 00:10:41.768069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.768448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.768854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.768883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.769299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.769669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.769696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.769945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.770196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.770223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.770555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.770929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.770957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.771231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.771567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.771593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 
00:26:11.702 [2024-04-27 00:10:41.771958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.772176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.772204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.772533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.772939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.772966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.773111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.773483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.773510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.773898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.774140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.774165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.774461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.774830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.774869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.775230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.775605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.775631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.776019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.776363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.776390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 
00:26:11.702 [2024-04-27 00:10:41.776781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.777051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.777079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.777426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.777675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.777703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.778055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.778305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.778331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.778690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.779053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.779082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.779449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.779805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.779832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.780018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.780350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.780377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.780617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.780859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.780888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 
00:26:11.702 [2024-04-27 00:10:41.781253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.781610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.781637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.782044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.782268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.782294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.782638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.782992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.783020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.783362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.783760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.783786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.784236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.784495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.784525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.784927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.785229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.785257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 00:26:11.702 [2024-04-27 00:10:41.785680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.786048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.786077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.702 qpair failed and we were unable to recover it. 
00:26:11.702 [2024-04-27 00:10:41.786463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.702 [2024-04-27 00:10:41.786827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.786863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.787246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.787605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.787630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.788024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.788369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.788395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.788761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.789014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.789044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.789433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.789863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.789891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.790264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.790569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.790594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.790946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.791291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.791317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 
00:26:11.703 [2024-04-27 00:10:41.791683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.792073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.792100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.792463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.792824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.792859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.793259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.793626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.793653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.794066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.794457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.794483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.794881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.795222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.795249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.795504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.795776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.795803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.796197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.796455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.796482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 
00:26:11.703 [2024-04-27 00:10:41.796882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.797250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.797276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.797656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.798007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.798035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.798375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.798769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.798795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.799241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.799534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.799560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.799954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.800248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.800276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.800621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.800978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.801011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.801384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.801771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.801798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 
00:26:11.703 [2024-04-27 00:10:41.802155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.802553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.802580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.802969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.803296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.803322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.803695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.804050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.804078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.703 qpair failed and we were unable to recover it. 00:26:11.703 [2024-04-27 00:10:41.804499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.804874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.703 [2024-04-27 00:10:41.804902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.805164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.805538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.805565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.805930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.806274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.806302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.806681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.807051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.807080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 
00:26:11.704 [2024-04-27 00:10:41.807439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.807834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.807877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.808291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.808673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.808705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.809089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.809345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.809375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.809806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.810175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.810203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.810556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.810898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.810925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.811317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.811736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.811762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.812174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.812539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.812566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 
00:26:11.704 [2024-04-27 00:10:41.812956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.813314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.813341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.813722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.814113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.814141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.814516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.814774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.814799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.815226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.815586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.815612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.815980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.816369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.816407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.816681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.817049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.817076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.817455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.817852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.817880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 
00:26:11.704 [2024-04-27 00:10:41.818284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.818658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.818685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.819073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.819422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.819448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.819809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.820239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.820267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.820686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.821034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.821063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.821397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.821798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.821824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.822188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.822585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.822611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 00:26:11.704 [2024-04-27 00:10:41.823062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.823307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.823337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.704 qpair failed and we were unable to recover it. 
00:26:11.704 [2024-04-27 00:10:41.823716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.824093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.704 [2024-04-27 00:10:41.824127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.824502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.824861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.824889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.825294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.825644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.825671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.826061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.826316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.826345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.826609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.826979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.827006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.827367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.827761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.827787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.828161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.828557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.828584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 
00:26:11.705 [2024-04-27 00:10:41.828897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.829303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.829330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.829719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.830061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.830088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.830438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.830818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.830851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.831207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.831565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.831593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.831980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.832357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.832383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.832623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.833002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.833030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.833402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.833775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.833802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 
00:26:11.705 [2024-04-27 00:10:41.834192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.834582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.834609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.834981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.835347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.835373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.835780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.836125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.836153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.836543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.836890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.836918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.837179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.837415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.837441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.837752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.838120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.838148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.838511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.838877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.838905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 
00:26:11.705 [2024-04-27 00:10:41.839316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.839617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.839645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.839962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.840312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.840338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.840723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.841081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.841109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.841546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.841874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.841902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.842283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.842655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.842682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.843070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.843467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.705 [2024-04-27 00:10:41.843493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.705 qpair failed and we were unable to recover it. 00:26:11.705 [2024-04-27 00:10:41.843865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.844295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.844321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 
00:26:11.706 [2024-04-27 00:10:41.844718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.845015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.845042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.845441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.845799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.845825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.846261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.846609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.846635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.847027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.847258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.847283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.847674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.848047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.848074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.848436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.848672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.848700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.848978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.849391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.849417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 
00:26:11.706 [2024-04-27 00:10:41.849789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.850128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.850156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.850538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.850910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.850938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.851343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.851748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.851774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.852209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.852567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.852593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.852892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.853266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.853292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.853659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.854053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.854080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.854489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.854870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.854898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 
00:26:11.706 [2024-04-27 00:10:41.855182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.855487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.855512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.855975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.856378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.856405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.856771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.857155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.857183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.857569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.857907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.857935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.858199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.858582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.858609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.858994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.859305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.859333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.859601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.860024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.860051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 
00:26:11.706 [2024-04-27 00:10:41.860417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.860670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.860697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.861054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.861441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.861467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.861860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.862224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.862251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.862612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.862987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.863017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.863403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.863751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.863777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.864161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.864557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.706 [2024-04-27 00:10:41.864583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.706 qpair failed and we were unable to recover it. 00:26:11.706 [2024-04-27 00:10:41.865015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.865376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.865403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 
00:26:11.707 [2024-04-27 00:10:41.865813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.866206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.866234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.866601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.867000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.867028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.867352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.867714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.867740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.868145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.868509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.868535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.868909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.869296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.869322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.869707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.869965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.869994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.870325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.870766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.870793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 
00:26:11.707 [2024-04-27 00:10:41.871182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.871540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.871567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.871825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.872213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.872239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.872607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.872984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.873013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.873369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.873806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.873832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.874188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.874548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.874574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.874981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.875363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.875389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.875754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.876159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.876187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 
00:26:11.707 [2024-04-27 00:10:41.876578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.876958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.876986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.877295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.877638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.877664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.878034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.878418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.878446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.878830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.879087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.879117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.879489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.879744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.879771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.880126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.880482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.880510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.880775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.881045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.881075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 
00:26:11.707 [2024-04-27 00:10:41.881484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.881862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.881890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.882303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.882653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.882680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.883090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.883484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.883510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.883922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.884366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.884392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.884713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.885074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.885102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.885475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.885876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.885904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 00:26:11.707 [2024-04-27 00:10:41.886297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.886680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.707 [2024-04-27 00:10:41.886706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.707 qpair failed and we were unable to recover it. 
00:26:11.707 [2024-04-27 00:10:41.887091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.887454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.887480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.887754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.888002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.888031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.888425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.888869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.888899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.889263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.889627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.889654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.889898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.890274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.890301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.890679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.891052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.891080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.891493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.891892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.891920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 
00:26:11.708 [2024-04-27 00:10:41.892298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.892685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.892712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.892996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.893373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.893400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.893666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.893969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.893997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.894367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.894658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.894684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.895102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.895348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.895374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.895752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.896108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.896136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.896536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.896958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.896986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 
00:26:11.708 [2024-04-27 00:10:41.897261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.897502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.897530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.897874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.898263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.898290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.898693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.899056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.899084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.899428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.899805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.899831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.900204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.900445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.900474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.900872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.901243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.901270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.901636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.901977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.902005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 
00:26:11.708 [2024-04-27 00:10:41.902384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.902769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.902796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.903183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.903538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.903565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.903820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.904259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.904287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.904670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.905044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.905073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.708 qpair failed and we were unable to recover it. 00:26:11.708 [2024-04-27 00:10:41.905465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.708 [2024-04-27 00:10:41.905813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.709 [2024-04-27 00:10:41.905860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-04-27 00:10:41.906286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.906746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.906772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-04-27 00:10:41.907031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.907415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.907443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 
00:26:11.978 [2024-04-27 00:10:41.907830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.908218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.908245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-04-27 00:10:41.908638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.909040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.909069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-04-27 00:10:41.909337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.909712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.909738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.978 [2024-04-27 00:10:41.910123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.910489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.978 [2024-04-27 00:10:41.910515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.978 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.910909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.911239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.911275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.911668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.912044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.912072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.912332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.912678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.912704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 
00:26:11.979 [2024-04-27 00:10:41.913081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.913448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.913474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.913835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.914235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.914262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.914651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.915045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.915074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.915462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.915830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.915867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.916235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.916613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.916639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.917032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.917406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.917433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.917691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.918042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.918069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 
00:26:11.979 [2024-04-27 00:10:41.918441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.918822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.918867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.919267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.919617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.919645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.920066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.920434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.920460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.920735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.921098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.921125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.921555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.921947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.921974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.922362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.922728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.922761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.923187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.923581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.923607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 
00:26:11.979 [2024-04-27 00:10:41.924015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.924273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.924301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.924726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.925035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.925063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.925447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.925794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.925820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.926113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.926500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.926526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.926912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.927251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.927278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.927524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.927877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.927906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.928252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.928609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.928636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 
00:26:11.979 [2024-04-27 00:10:41.929084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.929470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.929496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.929889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.930330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.930371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.979 [2024-04-27 00:10:41.930761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.931120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.979 [2024-04-27 00:10:41.931149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.979 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.931528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.931931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.931959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.932447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.932804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.932831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.933267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.933648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.933676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.934073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.934436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.934463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 
00:26:11.980 [2024-04-27 00:10:41.934869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.935232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.935259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.935668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.935996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.936024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.936419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.936752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.936778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.937186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.937577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.937604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.937991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.938348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.938381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.938739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.939127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.939156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.939527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.939934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.939962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 
00:26:11.980 [2024-04-27 00:10:41.940346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.940675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.940701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.940948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.941340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.941367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.941639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.942016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.942044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.942297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.942686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.942713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.943096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.943466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.943492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.943873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.944235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.944261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.944664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.944917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.944946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 
00:26:11.980 [2024-04-27 00:10:41.945264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.945625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.945650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.946081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.946432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.946460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.946809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.947213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.947242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.947637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.947995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.948024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.948484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.948880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.948908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.949194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.949580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.949607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.949994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.950363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.950389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 
00:26:11.980 [2024-04-27 00:10:41.950789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.951192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.980 [2024-04-27 00:10:41.951219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.980 qpair failed and we were unable to recover it. 00:26:11.980 [2024-04-27 00:10:41.951631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.952047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.952075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.952447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.952830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.952866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.953219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.953577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.953603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.954034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.954281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.954310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.954599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.954982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.955010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.955391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.955748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.955775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 
00:26:11.981 [2024-04-27 00:10:41.956215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.956567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.956593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.957030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.957410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.957437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.957844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.958248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.958275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.958683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.959053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.959081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.959462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.959714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.959741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.960152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.960582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.960609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.961007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.961384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.961411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 
00:26:11.981 [2024-04-27 00:10:41.961853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.962241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.962267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.962636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.963009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.963037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.963288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.963639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.963666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.964040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.964462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.964488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.964849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.965199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.965226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.965475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.965864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.965892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.966266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.966649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.966676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 
00:26:11.981 [2024-04-27 00:10:41.967068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.967447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.967474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.967854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.968243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.968269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.968543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.968794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.968822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.969275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.969724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.969750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.970031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.970279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.970308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.970690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.971073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.971101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 00:26:11.981 [2024-04-27 00:10:41.971387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.971713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.981 [2024-04-27 00:10:41.971740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.981 qpair failed and we were unable to recover it. 
00:26:11.981 [2024-04-27 00:10:41.972176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.972437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.972466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.972809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.973209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.973237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.973500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.973759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.973786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.974175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.974560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.974588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.974989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.975367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.975394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.975793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.976173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.976201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.976633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.977017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.977046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 
00:26:11.982 [2024-04-27 00:10:41.977325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.977718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.977745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.978009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.978383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.978409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.978809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.979215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.979243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.979672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.979924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.979953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.980347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.980716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.980743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.981147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.981513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.981541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.981845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.982221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.982248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 
00:26:11.982 [2024-04-27 00:10:41.982621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.983049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.983078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.983486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.983932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.983960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.984378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.984744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.984772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.985054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.985412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.985438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.985827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.986245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.986272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.986667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.987050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.987078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.987483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.987874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.987902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 
00:26:11.982 [2024-04-27 00:10:41.988271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.988640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.988667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.989040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.989433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.989460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.989742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.990145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.990173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.990593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.991044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.991072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.991402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.991703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.991729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.992014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.992287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.982 [2024-04-27 00:10:41.992316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.982 qpair failed and we were unable to recover it. 00:26:11.982 [2024-04-27 00:10:41.992680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.993034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.993062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 
00:26:11.983 [2024-04-27 00:10:41.993517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.993928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.993955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:41.994225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.994628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.994655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:41.995057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.995403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.995430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:41.995873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.996278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.996305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:41.996698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.997133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.997161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:41.997505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.997758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.997785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:41.998212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.998591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.998618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 
00:26:11.983 [2024-04-27 00:10:41.999026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.999387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:41.999414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:41.999791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.000198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.000227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.000598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.000982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.001010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.001417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.001799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.001826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.002193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.002549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.002576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.002861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.003258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.003285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.003566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.003986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.004013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 
00:26:11.983 [2024-04-27 00:10:42.004390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.004781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.004807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.005190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.005603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.005630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.006031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.006403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.006429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.006716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.007097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.007125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.007511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.007902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.007930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.008336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.008708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.008734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 00:26:11.983 [2024-04-27 00:10:42.009142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.009542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.009569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.983 qpair failed and we were unable to recover it. 
00:26:11.983 [2024-04-27 00:10:42.009865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.983 [2024-04-27 00:10:42.010256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.010284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.010696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.011010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.011038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.011411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.011792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.011819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.012231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.012614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.012641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.012911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.013306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.013333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.013745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.014113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.014142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.014542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.014934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.014962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 
00:26:11.984 [2024-04-27 00:10:42.015213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.015620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.015647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.016030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.016303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.016329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.016600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.016980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.017008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.017354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.017773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.017800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.018258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.018540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.018566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.018964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.019330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.019357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.019765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.020029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.020057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 
00:26:11.984 [2024-04-27 00:10:42.020440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.020803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.020830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.021230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.021679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.021706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.022146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.022545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.022572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.022831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.023136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.023174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.023621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.023881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.023913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.024365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.024745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.024773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.025203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.025588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.025615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 
00:26:11.984 [2024-04-27 00:10:42.025906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.026320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.026347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.026706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.027072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.027100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.027505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.027959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.027987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.028382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.028646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.028676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.029066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.029474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.029501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.029859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.030115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.030141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.984 qpair failed and we were unable to recover it. 00:26:11.984 [2024-04-27 00:10:42.030529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.984 [2024-04-27 00:10:42.030936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.030966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 
00:26:11.985 [2024-04-27 00:10:42.031231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.031576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.031603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.032007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.032386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.032412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.032816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.033083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.033111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.033513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.033894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.033922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.034150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.034436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.034464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.034873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.035257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.035283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.035679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.035809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.035849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 
00:26:11.985 [2024-04-27 00:10:42.036238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.036620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.036647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.037071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.037465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.037491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.037906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.038271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.038304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.038664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.039048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.039076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.039513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.039891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.039919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.040322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.040710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.040737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.041145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.041559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.041586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 
00:26:11.985 [2024-04-27 00:10:42.041967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.042367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.042394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.042770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.043024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.043051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.043449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.043809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.043835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.044197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.044587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.044614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.044993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.045383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.045409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.045868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.046287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.046325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.046689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.047055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.047083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 
00:26:11.985 [2024-04-27 00:10:42.047459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.047868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.047896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.048274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.048613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.048646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.049051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.049384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.049412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.049812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.050173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.050201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.050580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.050932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.050959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.985 qpair failed and we were unable to recover it. 00:26:11.985 [2024-04-27 00:10:42.051406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.985 [2024-04-27 00:10:42.051752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.051779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.052190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.052569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.052595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 
00:26:11.986 [2024-04-27 00:10:42.052962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.053332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.053358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.053733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.054083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.054117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.054507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.054907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.054935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.055338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.055731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.055756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.056128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.056479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.056506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.056866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.057198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.057224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.057631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.058011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.058039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 
00:26:11.986 [2024-04-27 00:10:42.058417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.058803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.058830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.059274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.059543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.059571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.059976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.060342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.060368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.060751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.061058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.061085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.061474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.061831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.061880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.062290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.062760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.062786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.063058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.063351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.063378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 
00:26:11.986 [2024-04-27 00:10:42.063761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.063998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.064027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.064441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.064812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.064848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.065254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.065648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.065675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.066136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.066520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.066547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.066904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.067305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.067331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.067733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.068155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.068182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.068553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.068944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.068971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 
00:26:11.986 [2024-04-27 00:10:42.069372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.069754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.069781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.070054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.070448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.070475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.070858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.071098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.071128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.071539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.071959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.071987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.986 qpair failed and we were unable to recover it. 00:26:11.986 [2024-04-27 00:10:42.072390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.986 [2024-04-27 00:10:42.072769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.072796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.073231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.073576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.073603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.073883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.074259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.074286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 
00:26:11.987 [2024-04-27 00:10:42.074663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.074927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.074957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.075378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.075729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.075755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.076199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.076525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.076552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.076956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.077337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.077363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.077764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.078149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.078177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.078426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.078819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.078856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.079240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.079598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.079626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 
00:26:11.987 [2024-04-27 00:10:42.079991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.080400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.080426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.080793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.081058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.081086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.081458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.081804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.081831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.082234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.082609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.082636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.083016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.083370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.083396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.083779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.084058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.084087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.084372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.084731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.084758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 
00:26:11.987 [2024-04-27 00:10:42.085167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.085566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.085593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.085992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.086358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.086385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.086784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.087164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.087192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.087601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.087859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.087889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.088248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.088509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.088539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.088938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.089327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.089354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.089733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.089976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.090006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 
00:26:11.987 [2024-04-27 00:10:42.090364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.090725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.090752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.091148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.091555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.091581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.092020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.092404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.092432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.987 qpair failed and we were unable to recover it. 00:26:11.987 [2024-04-27 00:10:42.092784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.987 [2024-04-27 00:10:42.093205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.093234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.093513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.093902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.093930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.094312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.094762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.094788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.095239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.095474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.095501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 
00:26:11.988 [2024-04-27 00:10:42.095893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.096261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.096288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.096684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.097085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.097114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.097376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.097639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.097669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.097945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.098351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.098378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.098756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.099137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.099166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.099567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.099927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.099956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.100393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.100788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.100815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 
00:26:11.988 [2024-04-27 00:10:42.101094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.101472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.101500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.101912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.102367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.102393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.102768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.103160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.103188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.103505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.103897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.103925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.104299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.104704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.104730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.105001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.105381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.105408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.105807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.106188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.106216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 
00:26:11.988 [2024-04-27 00:10:42.106604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.107011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.107039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.107456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.107718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.107744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.108178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.108450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.108478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.108884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.109259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.109285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.109735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.110001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.110031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.988 [2024-04-27 00:10:42.110388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.110748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.988 [2024-04-27 00:10:42.110775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.988 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.111149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.111532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.111559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 
00:26:11.989 [2024-04-27 00:10:42.111956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.112339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.112366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.112762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.113161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.113189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.113564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.113944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.113973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.114346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.114753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.114779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.115230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.115602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.115629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.115926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.116318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.116345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.116788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.117153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.117181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 
00:26:11.989 [2024-04-27 00:10:42.117578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.117856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.117886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.118168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.118498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.118525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.118921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.119289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.119315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.119710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.120107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.120135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.120536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.120937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.120965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.121353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.121613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.121640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.122033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.122303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.122331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 
00:26:11.989 [2024-04-27 00:10:42.122744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.123124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.123152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.123552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.123936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.123964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.124321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.124716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.124742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.125145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.125406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.125435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.125894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.126250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.126276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.126688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.127049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.127077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.127482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.127850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.127879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 
00:26:11.989 [2024-04-27 00:10:42.128255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.128633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.128661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.129071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.129440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.129466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.129882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.130256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.130283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.130689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.131056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.131083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.989 qpair failed and we were unable to recover it. 00:26:11.989 [2024-04-27 00:10:42.131499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.131877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.989 [2024-04-27 00:10:42.131905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.132164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.132452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.132479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.132874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.133207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.133236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 
00:26:11.990 [2024-04-27 00:10:42.133632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.134010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.134038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.134432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.134794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.134822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.135203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.135578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.135605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.136036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.136396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.136424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.136830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.137230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.137257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.137718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.138095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.138122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.138577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.138915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.138942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 
00:26:11.990 [2024-04-27 00:10:42.139270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.139656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.139683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.140059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.140440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.140466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.140879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.141239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.141265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.141717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.142090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.142118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.142536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.142896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.142923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.143321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.143708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.143738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.144135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.144522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.144552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 
00:26:11.990 [2024-04-27 00:10:42.144943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.145356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.145385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.145785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.146165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.146194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.146592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.146973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.147002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.147379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.147721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.147751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.148159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.148539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.148567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.148974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.149351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.149379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 00:26:11.990 [2024-04-27 00:10:42.149782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.150125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.990 [2024-04-27 00:10:42.150155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:11.990 qpair failed and we were unable to recover it. 
00:26:11.990 [2024-04-27 00:10:42.150545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.990 [2024-04-27 00:10:42.150923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.990 [2024-04-27 00:10:42.150953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420
00:26:11.990 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 00:10:42.151 through 00:10:42.267 ...]
00:26:12.264 [2024-04-27 00:10:42.267774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.264 [2024-04-27 00:10:42.268072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.264 [2024-04-27 00:10:42.268102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420
00:26:12.264 qpair failed and we were unable to recover it.
00:26:12.264 [2024-04-27 00:10:42.268499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.268757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.268786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.269150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.269511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.269541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.269797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.270186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.270216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.270615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.270943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.270972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.271367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.271834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.271888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.272272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.272575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.272604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.272999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.273359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.273388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 
00:26:12.264 [2024-04-27 00:10:42.273793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.274189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.274220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.274608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.274969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.274999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.275326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.275707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.275737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.276154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.276572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.276600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.276878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.277260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.277289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.277688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.278069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.278098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.264 [2024-04-27 00:10:42.278498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.278736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.278765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 
00:26:12.264 [2024-04-27 00:10:42.279207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.279592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.264 [2024-04-27 00:10:42.279621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.264 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.280061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.280416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.280445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.280835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.281090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.281121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.281520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.281909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.281939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.282307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.282721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.282750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.283137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.283478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.283507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.283969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.284365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.284394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 
00:26:12.265 [2024-04-27 00:10:42.284663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.285017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.285047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.285406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.285783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.285812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.286094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.286346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.286380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.286696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.287089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.287119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.287523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.287824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.287874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.288303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.288662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.288692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.289094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.289351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.289380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 
00:26:12.265 [2024-04-27 00:10:42.289768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.290152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.290183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.290463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.290816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.290857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.291226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.291594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.291624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.291995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.292373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.292404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.292795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.293222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.293251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.293658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.294046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.294083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.294354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.294726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.294755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 
00:26:12.265 [2024-04-27 00:10:42.295108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.295368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.295397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.265 [2024-04-27 00:10:42.295754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.296123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.265 [2024-04-27 00:10:42.296154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.265 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.296544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.296935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.296964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.297387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.297765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.297794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.298221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.298557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.298588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.298939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.299319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.299348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.299766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.300111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.300142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 
00:26:12.266 [2024-04-27 00:10:42.300562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.300936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.300968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.301375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.301677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.301714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.302089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.302472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.302500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.302900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.303284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.303313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.303704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.304088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.304118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.304509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.304884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.304913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.305303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.305695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.305724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 
00:26:12.266 [2024-04-27 00:10:42.306088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.306432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.306461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.306871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.307315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.307345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.307710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.308091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.308122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.308517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.308910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.308942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.309211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.309588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.309617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.310012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.310388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.310418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.310767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.311146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.311176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 
00:26:12.266 [2024-04-27 00:10:42.311575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.311918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.311949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.312341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.312720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.312749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.313019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.313347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.313375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.313649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.314051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.314082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.314468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.314857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.314888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.315305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.315618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.315648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 00:26:12.266 [2024-04-27 00:10:42.316050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.316393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.316422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.266 qpair failed and we were unable to recover it. 
00:26:12.266 [2024-04-27 00:10:42.316828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.266 [2024-04-27 00:10:42.317239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.317269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.317665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.317982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.318013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.318422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.318800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.318828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.319231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.319609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.319639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.320039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.320420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.320450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.320695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.321135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.321166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.321556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.321936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.321966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 
00:26:12.267 [2024-04-27 00:10:42.322386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.322764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.322792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.323185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.323522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.323553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.323828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.324098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.324128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.324546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.324794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.324824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.325246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.325654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.325683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.326087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.326460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.326489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.326783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.327192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.327221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 
00:26:12.267 [2024-04-27 00:10:42.327606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.328003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.328033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.328441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.328818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.328857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.329235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.329481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.329508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.329911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.330288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.330318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.330704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.331057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.331086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.331474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.331863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.331893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.332288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.332664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.332693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 
00:26:12.267 [2024-04-27 00:10:42.333077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.333332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.333361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.333787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.334171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.334202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.334563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.334941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.334986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.335407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.335792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.335821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.336242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.336622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.336650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.267 qpair failed and we were unable to recover it. 00:26:12.267 [2024-04-27 00:10:42.337045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.337428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.267 [2024-04-27 00:10:42.337457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.337855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.338243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.338272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 
00:26:12.268 [2024-04-27 00:10:42.338625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.339007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.339037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.339435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.339814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.339856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.340128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.340362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.340391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.340786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.341036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.341066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.341459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.341849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.341880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.342199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.342562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.342591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.342855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.343272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.343301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 
00:26:12.268 [2024-04-27 00:10:42.343578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.343957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.343993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.344382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.344762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.344791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.345193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.345589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.345618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.346026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.346417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.346446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.346853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.347264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.347293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.347680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.348060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.348092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.348522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.348781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.348811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 
00:26:12.268 [2024-04-27 00:10:42.349254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.349632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.349662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.350062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.350449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.350478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.350880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.351289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.351319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.351689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.352057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.352089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.352475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.352773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.352804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.353231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.353617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.353647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.354024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.354403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.354431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 
00:26:12.268 [2024-04-27 00:10:42.354823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.355257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.355286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.355704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.356101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.356132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.356546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.356926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.356956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.357328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.357697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.357726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.268 qpair failed and we were unable to recover it. 00:26:12.268 [2024-04-27 00:10:42.358104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.268 [2024-04-27 00:10:42.358443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.358471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.358897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.359297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.359326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.359684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.359947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.359978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 
00:26:12.269 [2024-04-27 00:10:42.360339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.360715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.360744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.361113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.361373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.361404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.361803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.362207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.362236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.362625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.363019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.363049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.363441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.363686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.363718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.363975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.364357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.364386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.364739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.365122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.365153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 
00:26:12.269 [2024-04-27 00:10:42.365534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.365945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.365975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.366371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.366745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.366774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.367209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.367585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.367613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.368004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.368268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.368299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.368681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.369050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.369081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.369472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.369856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.369887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.370307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.370562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.370592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 
00:26:12.269 [2024-04-27 00:10:42.370956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.371347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.371376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.371771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.372172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.372202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.372596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.372918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.372949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.373336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.373713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.373742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.374111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.374470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.374498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.374929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.375318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.375346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.375741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.376194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.376225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 
00:26:12.269 [2024-04-27 00:10:42.376620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.377005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.377035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.377444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.377797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.377826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.269 [2024-04-27 00:10:42.378262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.378644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.269 [2024-04-27 00:10:42.378673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.269 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.379033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.379414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.379443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.379834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.380248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.380277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.380667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.380934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.380965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.381352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.381750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.381778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 
00:26:12.270 [2024-04-27 00:10:42.382038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.382418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.382449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.382850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.383278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.383307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.383705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.384052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.384082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.384481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.384857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.384887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.385304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.385693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.385721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.386093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.386475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.386503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.386909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.387302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.387331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 
00:26:12.270 [2024-04-27 00:10:42.387699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.388102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.388133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.388401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.388778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.388806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.389201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.389582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.389611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.390017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.390361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.390389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.390656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.391031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.391061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.391440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.391817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.391859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.392255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.392620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.392649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 
00:26:12.270 [2024-04-27 00:10:42.393044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.393423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.393452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.393854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.394245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.394275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.394667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.395083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.395113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.395504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.395925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.395955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.270 qpair failed and we were unable to recover it. 00:26:12.270 [2024-04-27 00:10:42.396350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.270 [2024-04-27 00:10:42.396731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.396760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.397121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.397500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.397529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.397936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.398311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.398339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 
00:26:12.271 [2024-04-27 00:10:42.398760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.399139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.399169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.399595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.399980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.400010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.400404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.400824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.400863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.401247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.401508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.401538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.401928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.402308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.402338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.402767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.403145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.403175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.403445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.403829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.403873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 
00:26:12.271 [2024-04-27 00:10:42.404160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.404569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.404597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.404997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.405398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.405426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.405825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.406239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.406268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.406529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.406822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.406862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.407248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.407627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.407656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.407974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.408252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.408279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.408676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.409051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.409080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 
00:26:12.271 [2024-04-27 00:10:42.409477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.409858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.409887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.410305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.410687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.410716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.411096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.411461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.411497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.411905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.412288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.412317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.412712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.413094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.413124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.413532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.413795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.413826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.414224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.414571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.414600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 
00:26:12.271 [2024-04-27 00:10:42.414976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.415224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.415256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.415646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.416025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.416055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.271 qpair failed and we were unable to recover it. 00:26:12.271 [2024-04-27 00:10:42.416412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.416802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.271 [2024-04-27 00:10:42.416831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.417148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.417548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.417576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.417983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.418365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.418393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.418664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.419041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.419079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.419509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.419867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.419897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 
00:26:12.272 [2024-04-27 00:10:42.420154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.420550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.420579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.420974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.421369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.421398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.421793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.422155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.422185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.422580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.422981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.423011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.423436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.423816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.423857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.424290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.424552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.424581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.424950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.425303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.425333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 
00:26:12.272 [2024-04-27 00:10:42.425610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.425984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.426013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.426382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.426761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.426795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.427191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.427568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.427596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.427995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.428389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.428417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.428809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.429229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.429259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.429525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.429902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.429932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.430327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.430710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.430739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 
00:26:12.272 [2024-04-27 00:10:42.431112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.431500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.431528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.431926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.432308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.432336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.432741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.433167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.433196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.433551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.433927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.433956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.434340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.434719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.434748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.435114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.435476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.435505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.435933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.436318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.436347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 
00:26:12.272 [2024-04-27 00:10:42.436742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.437119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.272 [2024-04-27 00:10:42.437150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.272 qpair failed and we were unable to recover it. 00:26:12.272 [2024-04-27 00:10:42.437542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.437911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.437942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.273 qpair failed and we were unable to recover it. 00:26:12.273 [2024-04-27 00:10:42.438393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.438743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.438773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.273 qpair failed and we were unable to recover it. 00:26:12.273 [2024-04-27 00:10:42.439179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.439600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.439629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.273 qpair failed and we were unable to recover it. 00:26:12.273 [2024-04-27 00:10:42.439997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.440393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.440421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.273 qpair failed and we were unable to recover it. 00:26:12.273 [2024-04-27 00:10:42.440827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.441218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.441247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.273 qpair failed and we were unable to recover it. 00:26:12.273 [2024-04-27 00:10:42.441659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.442044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.273 [2024-04-27 00:10:42.442074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.273 qpair failed and we were unable to recover it. 
00:26:12.273 [2024-04-27 00:10:42.442469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.273 [2024-04-27 00:10:42.442759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.273 [2024-04-27 00:10:42.442788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420
00:26:12.273 qpair failed and we were unable to recover it.
00:26:12.273 [... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 00:10:42.442 through 00:10:42.559 ...]
00:26:12.549 [2024-04-27 00:10:42.559350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.549 [2024-04-27 00:10:42.559709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.549 [2024-04-27 00:10:42.559738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420
00:26:12.549 qpair failed and we were unable to recover it.
00:26:12.549 [2024-04-27 00:10:42.560092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.560471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.560499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 00:26:12.549 [2024-04-27 00:10:42.560858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.561263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.561292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 00:26:12.549 [2024-04-27 00:10:42.561689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.562048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.562078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 00:26:12.549 [2024-04-27 00:10:42.562432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.562813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.562854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 00:26:12.549 [2024-04-27 00:10:42.563232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.563611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.563640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 00:26:12.549 [2024-04-27 00:10:42.564035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.564412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.564442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 00:26:12.549 [2024-04-27 00:10:42.564816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.565209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.565239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 
00:26:12.549 [2024-04-27 00:10:42.565633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.565870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.565900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 00:26:12.549 [2024-04-27 00:10:42.566201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.566579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.566609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 00:26:12.549 [2024-04-27 00:10:42.566994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.567261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.549 [2024-04-27 00:10:42.567294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.549 qpair failed and we were unable to recover it. 00:26:12.549 [2024-04-27 00:10:42.567693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.568044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.568075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.568463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.568831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.568871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.569303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.569678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.569706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.570094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.570468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.570498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 
00:26:12.550 [2024-04-27 00:10:42.570887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.571170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.571197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.571604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.571980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.572010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.572403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.572768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.572798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.573182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.573462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.573493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.573870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.574264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.574293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.574688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.575086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.575115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.575368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.575707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.575735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 
00:26:12.550 [2024-04-27 00:10:42.576107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.576483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.576512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.576906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.577292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.577322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.577672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.578048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.578078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.578436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.578827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.578867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.579198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.579580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.579608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.580003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.580407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.580436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.580803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.581149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.581179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 
00:26:12.550 [2024-04-27 00:10:42.581574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.581955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.581986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.582257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.582600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.582628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.582995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.583397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.583425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.550 qpair failed and we were unable to recover it. 00:26:12.550 [2024-04-27 00:10:42.583818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.584196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.550 [2024-04-27 00:10:42.584225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.584578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.584941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.584972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.585371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.585751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.585780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.586118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.586501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.586529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 
00:26:12.551 [2024-04-27 00:10:42.586927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.587323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.587353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.587751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.588121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.588150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.588419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.588787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.588817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.589138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.589516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.589546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.589941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.590341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.590370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.590762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.591140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.591171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.591562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.591956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.591985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 
00:26:12.551 [2024-04-27 00:10:42.592356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.592724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.592753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.593115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.593448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.593476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.593924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.594312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.594340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.594730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.595112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.595142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.595499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.595896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.595925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.596328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.596704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.596733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.597106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.597486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.597514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 
00:26:12.551 [2024-04-27 00:10:42.597873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.598157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.598188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.598574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.598965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.598995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.599408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.599787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.599815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.600218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.600594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.600623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.600987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.601386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.601415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.601804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.602226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.602256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.602610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.603009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.603039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 
00:26:12.551 [2024-04-27 00:10:42.603436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.603812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.603851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.604220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.604602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.604630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.551 qpair failed and we were unable to recover it. 00:26:12.551 [2024-04-27 00:10:42.604998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.551 [2024-04-27 00:10:42.605360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.605391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.605784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.606134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.606164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.606554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.606931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.606961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.607366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.607731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.607759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.608102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.608474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.608501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 
00:26:12.552 [2024-04-27 00:10:42.608826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.609198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.609226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.609622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.610010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.610039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.610391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.610774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.610802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.611172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.611560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.611589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.611983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.612389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.612417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.612818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.613201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.613231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.613586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.613966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.613996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 
00:26:12.552 [2024-04-27 00:10:42.614382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.614760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.614790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.615164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.615579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.615608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.616018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.616254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.616283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.616614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.616986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.617016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.617382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.617764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.617793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.618212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.618594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.618624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.618980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.619356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.619385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 
00:26:12.552 [2024-04-27 00:10:42.619785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.620147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.620177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.620571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.620925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.620954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.621315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.621702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.621730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.622096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.622334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.622365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.622722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.623108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.623137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.623542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.623907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.623936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.552 [2024-04-27 00:10:42.624294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.624553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.624584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 
00:26:12.552 [2024-04-27 00:10:42.624947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.625349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.552 [2024-04-27 00:10:42.625377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.552 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.625766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.626148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.626178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.626571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.626948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.626977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.627388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.627768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.627797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.628199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.628610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.628641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.629010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.629388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.629417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.629814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.630090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.630123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 
00:26:12.553 [2024-04-27 00:10:42.630487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.630904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.630934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.631333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.631587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.631615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.632016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.632280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.632312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.632699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.633059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.633090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.633484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.633867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.633896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.634322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.634701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.634729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.635100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.635447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.635477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 
00:26:12.553 [2024-04-27 00:10:42.635822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.636207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.636238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.636621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.637014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.637043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.637451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.637834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.637873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.638228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.638572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.638601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.639068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.639456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.639484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.639883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.640261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.640289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.640692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.641044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.641074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 
00:26:12.553 [2024-04-27 00:10:42.641476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.641861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.641893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.642315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.642688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.642717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.643096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.643465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.643494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.643889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.644247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.644281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.644671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.645047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.645077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.645477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.645858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.645888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.553 qpair failed and we were unable to recover it. 00:26:12.553 [2024-04-27 00:10:42.646319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.553 [2024-04-27 00:10:42.646699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.646728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 
00:26:12.554 [2024-04-27 00:10:42.647135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.647519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.647548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.647953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.648336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.648365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.648709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.649052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.649081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.649471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.649724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.649752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.650112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.650490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.650519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.650908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.651259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.651287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.651671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.652052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.652087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 
00:26:12.554 [2024-04-27 00:10:42.652474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.652858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.652888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.653274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.653684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.653712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.654100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.654480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.654509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.654908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.655295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.655324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.655720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.656115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.656144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.656534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.656913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.656943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.657309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.657593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.657623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 
00:26:12.554 [2024-04-27 00:10:42.658013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.658371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.658400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.658800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.659181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.659211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.659601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.659869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.659907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.660302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.660685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.660714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.661076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.661458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.661487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.661878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.662282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.662310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.662707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.663097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.663126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 
00:26:12.554 [2024-04-27 00:10:42.663531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.663891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.663920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.664303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.664683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.664712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.665090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.665465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.665494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.665889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.666179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.666206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.666606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.666933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.666962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.554 qpair failed and we were unable to recover it. 00:26:12.554 [2024-04-27 00:10:42.667417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.554 [2024-04-27 00:10:42.667795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.667828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.555 qpair failed and we were unable to recover it. 00:26:12.555 [2024-04-27 00:10:42.668226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.668604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.668633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.555 qpair failed and we were unable to recover it. 
00:26:12.555 [2024-04-27 00:10:42.669035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.669382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.669410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.555 qpair failed and we were unable to recover it. 00:26:12.555 [2024-04-27 00:10:42.669695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.670050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.670080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.555 qpair failed and we were unable to recover it. 00:26:12.555 [2024-04-27 00:10:42.670471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.670784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.670812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.555 qpair failed and we were unable to recover it. 00:26:12.555 [2024-04-27 00:10:42.671087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.671464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.671493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.555 qpair failed and we were unable to recover it. 00:26:12.555 [2024-04-27 00:10:42.671889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.672169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.672199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.555 qpair failed and we were unable to recover it. 00:26:12.555 [2024-04-27 00:10:42.672598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.672980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.673009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.555 qpair failed and we were unable to recover it. 00:26:12.555 [2024-04-27 00:10:42.673408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.673783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.555 [2024-04-27 00:10:42.673811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbc70000b90 with addr=10.0.0.2, port=4420 00:26:12.555 qpair failed and we were unable to recover it. 
00:26:12.555 [2024-04-27 00:10:42.674073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1262160 is same with the state(5) to be set 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Write completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Write completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Write completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.555 Read completed with error (sct=0, sc=8) 00:26:12.555 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Write completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Write completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Write completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 Read completed with error (sct=0, sc=8) 00:26:12.556 starting I/O failed 00:26:12.556 [2024-04-27 00:10:42.674430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.556 [2024-04-27 00:10:42.674872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.675360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.675423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 
00:26:12.556 [2024-04-27 00:10:42.675814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.676325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.676388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.676777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.677265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.677328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.677681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.678159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.678222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.678617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.679107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.679171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.679563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.680471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.680505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.680901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.681323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.681337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.681715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.682045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.682059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 
00:26:12.556 [2024-04-27 00:10:42.682432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.682697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.682710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.683087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.683467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.683479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.683857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.684188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.684202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.684578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.684936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.684948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.685303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.685613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.685624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.685982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.686227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.686239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.686501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.686771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.686782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 
00:26:12.556 [2024-04-27 00:10:42.687021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.687384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.687397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.687772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.688121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.688137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.688518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.688909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.688922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.689295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.689648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.689661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.690132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.690392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.690404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.690776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.691139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.691152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.691505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.691905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.691918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 
00:26:12.556 [2024-04-27 00:10:42.692262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.692511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.692523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.556 qpair failed and we were unable to recover it. 00:26:12.556 [2024-04-27 00:10:42.692877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.556 [2024-04-27 00:10:42.693264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.693276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.693632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.694011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.694024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.694408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.694832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.694868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.695205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.695439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.695452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.695788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.696201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.696214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.696554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.696903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.696917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 
00:26:12.557 [2024-04-27 00:10:42.697297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.697663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.697677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.698016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.698384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.698396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.698771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.699109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.699123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.699474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.699914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.699926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.700293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.700443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.700457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.700684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.700916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.700929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.701205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.701542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.701554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 
00:26:12.557 [2024-04-27 00:10:42.701904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.702295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.702307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.702702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.702920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.702932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.703348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.703702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.703715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.704058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.704362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.704375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.704728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.705089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.705102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.705460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.705808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.705820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.706084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.706474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.706486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 
00:26:12.557 [2024-04-27 00:10:42.706897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.707283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.707295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.707646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.707990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.708003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.708324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.708684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.708696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.709050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.709437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.709450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.709808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.710179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.710192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.710561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.710944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.710956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 00:26:12.557 [2024-04-27 00:10:42.711328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.711678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.711692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.557 qpair failed and we were unable to recover it. 
00:26:12.557 [2024-04-27 00:10:42.712048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.557 [2024-04-27 00:10:42.712400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.712415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.712815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.713175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.713190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.713551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.713797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.713811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.714192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.714561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.714576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.714944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.715259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.715275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.715674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.716008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.716022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.716379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.716767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.716778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 
00:26:12.558 [2024-04-27 00:10:42.717118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.717494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.717506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.717877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.718256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.718269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.718620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.718996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.719010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.719376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.719761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.719775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.720025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.720363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.720375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.720724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.721100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.721113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.721485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.721867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.721882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 
00:26:12.558 [2024-04-27 00:10:42.722251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.722637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.722649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.723002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.723373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.723386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.724277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.724625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.724638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.724892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.725273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.725300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.725605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.725973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.725986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.726352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.726731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.726743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.727075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.727502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.727513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 
00:26:12.558 [2024-04-27 00:10:42.727839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.728097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.728108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.728453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.728790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.728801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.729005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.729344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.729354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.729728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.730096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.730107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.730477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.730713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.730723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.731112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.731478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.731488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 00:26:12.558 [2024-04-27 00:10:42.731863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.732226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.558 [2024-04-27 00:10:42.732241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.558 qpair failed and we were unable to recover it. 
00:26:12.559 [2024-04-27 00:10:42.732602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.732970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.732980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.733328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.733709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.733719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.734109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.734479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.734488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.734863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.735229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.735238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.735600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.735970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.735980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.736243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.736590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.736600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.736856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.737162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.737174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 
00:26:12.559 [2024-04-27 00:10:42.737411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.737742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.737753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.738094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.738469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.738480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.738845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.739218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.739227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.739584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.739960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.739970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.740317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.740535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.740547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.740868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.741213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.741222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.741565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.741945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.741955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 
00:26:12.559 [2024-04-27 00:10:42.742306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.742679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.742690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.742916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.743148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.743159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.743380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.743785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.743794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.744063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.744422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.744432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.744809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.745184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.745194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.745572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.745956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.745967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.746347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.746678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.746688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 
00:26:12.559 [2024-04-27 00:10:42.747067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.747377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.747386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.747728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.748100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.748110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.748472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.748877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.748887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.749145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.749409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.749418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.749785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.750143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.750155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.559 [2024-04-27 00:10:42.750523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.750946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.559 [2024-04-27 00:10:42.750966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.559 qpair failed and we were unable to recover it. 00:26:12.560 [2024-04-27 00:10:42.751193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.751415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.751424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.560 qpair failed and we were unable to recover it. 
00:26:12.560 [2024-04-27 00:10:42.751704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.752046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.752055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.560 qpair failed and we were unable to recover it. 00:26:12.560 [2024-04-27 00:10:42.752429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.752787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.752796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.560 qpair failed and we were unable to recover it. 00:26:12.560 [2024-04-27 00:10:42.753134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.753411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.753420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.560 qpair failed and we were unable to recover it. 00:26:12.560 [2024-04-27 00:10:42.753785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.754002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.754013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.560 qpair failed and we were unable to recover it. 00:26:12.560 [2024-04-27 00:10:42.754265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.754637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.754647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.560 qpair failed and we were unable to recover it. 00:26:12.560 [2024-04-27 00:10:42.754993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.755342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.755352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.560 qpair failed and we were unable to recover it. 00:26:12.560 [2024-04-27 00:10:42.755728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.756134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.560 [2024-04-27 00:10:42.756144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.560 qpair failed and we were unable to recover it. 
00:26:12.824 [2024-04-27 00:10:42.756512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.756772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.756783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.757134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.757444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.757455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.757720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.758091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.758103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.758354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.758676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.758686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.759071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.759283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.759292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.759628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.760022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.760032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.760384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.760759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.760768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 
00:26:12.824 [2024-04-27 00:10:42.761109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.761494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.761503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.761738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.762127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.762137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.762473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.762817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.762827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.763186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.763530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.763540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.763777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.764142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.764151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.764519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.764891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.764902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 00:26:12.824 [2024-04-27 00:10:42.765249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.765473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.824 [2024-04-27 00:10:42.765484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.824 qpair failed and we were unable to recover it. 
00:26:12.824 [2024-04-27 00:10:42.765829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.766207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.766217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.766589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.766944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.766958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.767309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.767688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.767698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.768072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.768453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.768463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.768686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.769014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.769025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.769380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.769754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.769764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.770181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.770529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.770538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 
00:26:12.825 [2024-04-27 00:10:42.770908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.771253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.771262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.771636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.771860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.771871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.772121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.772491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.772501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.772751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.773118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.773127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.773489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.773843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.773853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.774212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.774593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.774603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.775019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.775400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.775411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 
00:26:12.825 [2024-04-27 00:10:42.775771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.776118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.776130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.776504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.776899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.776909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.777282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.777658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.777668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.777943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.778337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.778347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.778722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.779140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.779149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.779510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.779722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.779732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.780050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.780402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.780411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 
00:26:12.825 [2024-04-27 00:10:42.780815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.781033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.781043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.781390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.781772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.781781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.782026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.782294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.782304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.782629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.782986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.782996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.783230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.783542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.783552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.783944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.784350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.784360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.784735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.785079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.785090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 
00:26:12.825 [2024-04-27 00:10:42.785489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.785849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.785859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.786195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.786554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.786563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.786932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.787289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.787298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.787713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.788122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.788132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.788478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.788894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.788905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.789256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.789620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.789630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.789995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.790365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.790374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 
00:26:12.825 [2024-04-27 00:10:42.790439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.790658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.790667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.790991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.791309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.791318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.791715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.791947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.791958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.792343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.792772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.792782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.793114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.793482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.793492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.793865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.794256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.794265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.794604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.795008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.795018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 
00:26:12.825 [2024-04-27 00:10:42.795345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.795570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.795580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.795909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.796262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.796271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.796644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.797009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.797019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.797396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.797564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.797574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.797822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.798237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.798248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.798456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.798787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.798796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.799227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.799649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.799659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 
00:26:12.825 [2024-04-27 00:10:42.799876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.800202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.800212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.800550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.800890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.800901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.825 [2024-04-27 00:10:42.801285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.801571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.825 [2024-04-27 00:10:42.801581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.825 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.801993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.802375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.802387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.802731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.803110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.803120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.803460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.803841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.803851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.804227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.804605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.804614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 
00:26:12.826 [2024-04-27 00:10:42.804979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.805368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.805377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.805569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.805948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.805959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.806330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.806701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.806711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.807070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.807452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.807461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.807825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.808229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.808238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.808602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.809088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.809140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.809538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.809922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.809934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 
00:26:12.826 [2024-04-27 00:10:42.810262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.810638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.810647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.811055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.811402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.811411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.811768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.812130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.812140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.812541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.812858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.812869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.813160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.813515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.813525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.813889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.814133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.814145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 00:26:12.826 [2024-04-27 00:10:42.814494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.814867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.826 [2024-04-27 00:10:42.814877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.826 qpair failed and we were unable to recover it. 
00:26:12.826 [2024-04-27 00:10:42.815212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.826 [2024-04-27 00:10:42.815519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.826 [2024-04-27 00:10:42.815528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:12.826 qpair failed and we were unable to recover it.
[... the same failure sequence — two posix.c:1037:posix_sock_create "connect() failed, errno = 111" entries, followed by nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." — repeats continuously from 00:10:42.815 through 00:10:42.918 ...]
00:26:12.828 [2024-04-27 00:10:42.918200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.828 [2024-04-27 00:10:42.918538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.828 [2024-04-27 00:10:42.918547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:12.828 qpair failed and we were unable to recover it.
00:26:12.829 [2024-04-27 00:10:42.918883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.919240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.919248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.919568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.919943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.919953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.920274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.920602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.920612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.920985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.921314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.921323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.921685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.922024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.922034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.922358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.922700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.922710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.923086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.923438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.923447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 
00:26:12.829 [2024-04-27 00:10:42.923799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.924147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.924157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.924392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.924724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.924733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.924917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.925333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.925343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.925700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.925920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.925930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.926254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.926586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.926595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.926896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.927283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.927292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.927526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.927874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.927884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 
00:26:12.829 [2024-04-27 00:10:42.928236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.928601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.928610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.928849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.929195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.929204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.929561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.929930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.929940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.930389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.930750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.930759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.931136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.931488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.931497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.931854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.932179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.932188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 00:26:12.829 [2024-04-27 00:10:42.932505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.932746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.829 [2024-04-27 00:10:42.932754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.829 qpair failed and we were unable to recover it. 
00:26:12.830 [2024-04-27 00:10:42.933038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.933372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.933381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.933730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.934082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.934092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.934446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.934804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.934812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.935049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.935400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.935409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.935763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.936100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.936110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.936457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.936833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.936858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.937216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.937549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.937558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 
00:26:12.830 [2024-04-27 00:10:42.937939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.938285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.938294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.938489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.938731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.938741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.938926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.939226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.939236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.939597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.939840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.939850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.940168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.940535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.940544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.940904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.941186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.941195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.941528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.941890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.941900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 
00:26:12.830 [2024-04-27 00:10:42.942258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.942629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.942638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.942980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.943331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.943340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.943689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.944022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.944032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.944398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.944603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.944613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.944859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.945110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.945119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.945385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.945678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.945687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.946087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.946444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.946454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 
00:26:12.830 [2024-04-27 00:10:42.946812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.947178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.947190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.947406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.947771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.947781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.948134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.948333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.948342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.948584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.948913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.948923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.949270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.949635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.949644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.949986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.950189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.950198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.950589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.950949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.950959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 
00:26:12.830 [2024-04-27 00:10:42.951318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.951677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.951688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.952039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.952406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.952415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.952775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.953150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.953159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.953506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.953868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.953879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.954194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.954489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.954498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.954812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.955181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.955191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.955532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.955933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.955943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 
00:26:12.830 [2024-04-27 00:10:42.956262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.956621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.956630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.956844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.957157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.957167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.957515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.957882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.957892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.958244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.958604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.958613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.958989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.959308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.959317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.959646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.960008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.960018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.960365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.960728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.960737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 
00:26:12.830 [2024-04-27 00:10:42.961124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.961483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.961492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.961858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.962230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.962239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.962593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.962957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.962967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.963324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.963651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.963659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.964011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.964272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.964282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.964642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.965012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.965021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.965372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.965733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.965742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 
00:26:12.830 [2024-04-27 00:10:42.966099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.966459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.966468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.966820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.967209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.967219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.830 [2024-04-27 00:10:42.967512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.967845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.830 [2024-04-27 00:10:42.967855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.830 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.968208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.968577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.968586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.968927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.969245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.969254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.969603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.969960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.969969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.970310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.970674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.970683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 
00:26:12.831 [2024-04-27 00:10:42.971004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.971330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.971340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.971547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.971748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.971757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.972036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.972343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.972352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.972721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.973090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.973100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.973433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.973759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.973767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.974112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.974483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.974491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.974845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.975171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.975181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 
00:26:12.831 [2024-04-27 00:10:42.975540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.975898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.975908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.976108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.976429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.976438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.976751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.977086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.977096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.977313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.977669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.977678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.978023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.978383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.978392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.978725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.979079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.979089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.979447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.979821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.979831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 
00:26:12.831 [2024-04-27 00:10:42.980181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.980536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.980545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.980893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.981114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.981123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.981452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.981783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.981795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.982005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.982356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.982365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.982713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.982970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.982980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.983329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.983656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.983665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.984007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.984380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.984389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 
00:26:12.831 [2024-04-27 00:10:42.984741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.985137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.985146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.985495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.985850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.985860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.986204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.986565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.986575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.986915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.987243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.987252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.987609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.987987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.987996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.988352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.988678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.988690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.989046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.989408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.989417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 
00:26:12.831 [2024-04-27 00:10:42.989635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.989898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.989907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.990197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.990547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.990556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.990910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.991241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.991249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.991603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.991946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.991956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.992331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.992658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.992667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.993015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.993373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.993382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.993745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.994088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.994098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 
00:26:12.831 [2024-04-27 00:10:42.994431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.994760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.994769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.995052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.995416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.995425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.995658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.996009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.996019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.996359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.996737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.996746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.996973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.997193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.997204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.997539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.997894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.997903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.998240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.998453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.998463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 
00:26:12.831 [2024-04-27 00:10:42.998694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.999027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.999037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:42.999376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.999742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:42.999751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.831 qpair failed and we were unable to recover it. 00:26:12.831 [2024-04-27 00:10:43.000086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.831 [2024-04-27 00:10:43.000457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.000466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.000725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.001049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.001058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.001315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.001617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.001626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.001976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.002316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.002326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.002680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.003016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.003025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 
00:26:12.832 [2024-04-27 00:10:43.003353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.003709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.003718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.003950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.004160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.004169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.004365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.004695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.004705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.005066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.005395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.005405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.005751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.006089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.006099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.006429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.006796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.006805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.007169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.007550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.007560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 
00:26:12.832 [2024-04-27 00:10:43.007954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.008168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.008177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.008538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.008883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.008893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.009243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.009570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.009579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.009935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.010304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.010313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.010649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.010975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.010984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.011197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.011418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.011428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.011629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.011984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.011994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 
00:26:12.832 [2024-04-27 00:10:43.012362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.012473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.012482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.012886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.013056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.013065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.013283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.013606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.013615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.013842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.014177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.014186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.014531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.014852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.014862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.015184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.015503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.015512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.015823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.016251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.016260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 
00:26:12.832 [2024-04-27 00:10:43.016585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.016950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.016960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.017317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.017533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.017542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.017879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.018207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.018216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.018563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.018650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.018658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.018983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.019356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.019365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.019712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.020063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.020073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.020396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.020718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.020727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 
00:26:12.832 [2024-04-27 00:10:43.020999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.021325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.021335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.021693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.022027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.022036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.022383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.022731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.022740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.023080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.023451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.023460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.023797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.024145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.024155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.024502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.024860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.024870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.025098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.025390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.025399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 
00:26:12.832 [2024-04-27 00:10:43.025737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.026085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.026095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.026444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.026703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.026712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.027043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.027403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.027412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.027750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.028110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.028119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.028469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.028845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.028855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.029203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.029505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.832 [2024-04-27 00:10:43.029514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.832 qpair failed and we were unable to recover it. 00:26:12.832 [2024-04-27 00:10:43.029856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.030154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.030164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 
00:26:12.833 [2024-04-27 00:10:43.030516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.030874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.030884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.031221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.031584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.031593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.031940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.032301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.032310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.032657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.033015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.033025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.033384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.033729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.033738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.034081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.034440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.034449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.034752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.035087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.035096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 
00:26:12.833 [2024-04-27 00:10:43.035438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.035798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.035806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.036154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.036493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.036502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.036808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.037164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.037174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.037506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.037867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.037877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.038217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.038574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.038583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.038934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.039265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.833 [2024-04-27 00:10:43.039274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-27 00:10:43.039615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.039993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.040005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.102 qpair failed and we were unable to recover it. 
00:26:13.102 [2024-04-27 00:10:43.040343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.040669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.040679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.102 qpair failed and we were unable to recover it. 00:26:13.102 [2024-04-27 00:10:43.041036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.041241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.041251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.102 qpair failed and we were unable to recover it. 00:26:13.102 [2024-04-27 00:10:43.041455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.041679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.041690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.102 qpair failed and we were unable to recover it. 00:26:13.102 [2024-04-27 00:10:43.042043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.042260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.042271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.102 qpair failed and we were unable to recover it. 00:26:13.102 [2024-04-27 00:10:43.042615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.042831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.042856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.102 qpair failed and we were unable to recover it. 00:26:13.102 [2024-04-27 00:10:43.043189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.043443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.102 [2024-04-27 00:10:43.043452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.102 qpair failed and we were unable to recover it. 00:26:13.102 [2024-04-27 00:10:43.043809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.044031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.044041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 
00:26:13.103 [2024-04-27 00:10:43.044344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.044662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.044671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.045036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.045402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.045410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.045662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.045879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.045889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.046252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.046615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.046623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.046976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.047338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.047347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.047706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.048095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.048104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.048459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.048704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.048713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 
00:26:13.103 [2024-04-27 00:10:43.049080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.049440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.049449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.049811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.050169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.050178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.050535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.050873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.050882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.051297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.051653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.051662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.052074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.052407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.052416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.052791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.053148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.053157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.053507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.053801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.053810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 
00:26:13.103 [2024-04-27 00:10:43.054167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.054469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.054478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.054834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.055174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.055183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.055383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.055717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.055728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.056072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.056403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.056412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.056767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.057144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.057153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.057503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.057816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.057825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.058194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.058556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.058565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 
00:26:13.103 [2024-04-27 00:10:43.058911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.059252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.059261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.059587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.059904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.059914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.060243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.060594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.060603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.060929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.061286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.061295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.103 [2024-04-27 00:10:43.061647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.061848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.103 [2024-04-27 00:10:43.061858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.103 qpair failed and we were unable to recover it. 00:26:13.104 [2024-04-27 00:10:43.062180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.062531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.062540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.104 qpair failed and we were unable to recover it. 00:26:13.104 [2024-04-27 00:10:43.062783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.063028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.063039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.104 qpair failed and we were unable to recover it. 
00:26:13.104 [2024-04-27 00:10:43.063376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.063700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.063709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.104 qpair failed and we were unable to recover it. 00:26:13.104 [2024-04-27 00:10:43.064061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.064374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.064383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.104 qpair failed and we were unable to recover it. 00:26:13.104 [2024-04-27 00:10:43.064742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.065078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.065088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.104 qpair failed and we were unable to recover it. 00:26:13.104 [2024-04-27 00:10:43.065446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.065794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.065803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.104 qpair failed and we were unable to recover it. 00:26:13.104 [2024-04-27 00:10:43.066150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.066506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.066515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.104 qpair failed and we were unable to recover it. 00:26:13.104 [2024-04-27 00:10:43.066826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.067038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.067048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.104 qpair failed and we were unable to recover it. 00:26:13.104 [2024-04-27 00:10:43.067398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.067762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.104 [2024-04-27 00:10:43.067771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.104 qpair failed and we were unable to recover it. 
00:26:13.104 [2024-04-27 00:10:43.068111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.104 [2024-04-27 00:10:43.068421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.104 [2024-04-27 00:10:43.068430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:13.104 qpair failed and we were unable to recover it.
[... the same failure pattern — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 00:10:43.068 through 00:10:43.173 (console timestamps 00:26:13.104–00:26:13.110) ...]
00:26:13.110 [2024-04-27 00:10:43.173391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.173649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.173658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.174000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.174324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.174333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.174728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.175026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.175035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.175347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.175714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.175723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.176050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.176372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.176380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.176732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.177089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.177099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.177422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.177766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.177775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 
00:26:13.110 [2024-04-27 00:10:43.178106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.178474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.178483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.178850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.179168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.179177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.179535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.179899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.179908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.180242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.180589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.180598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.180943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.181266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.181275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.181629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.181923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.181933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 00:26:13.110 [2024-04-27 00:10:43.182303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.182679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.110 [2024-04-27 00:10:43.182688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.110 qpair failed and we were unable to recover it. 
00:26:13.110 [2024-04-27 00:10:43.183065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.183416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.183424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.183776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.184118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.184128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.184475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.184835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.184849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.185170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.185528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.185537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.185874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.186160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.186170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.186512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.186873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.186882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.187208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.187556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.187565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 
00:26:13.111 [2024-04-27 00:10:43.187875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.188206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.188215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.188572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.188941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.188950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.189319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.189556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.189565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.189904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.190229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.190238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.190557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.190910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.190920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.191276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.191625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.191634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.191986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.192360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.192369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 
00:26:13.111 [2024-04-27 00:10:43.192728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.192936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.192946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.193273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.193625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.193634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.193983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.194306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.194314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.194652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.194862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.194872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.195190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.195546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.195555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.195908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.196265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.196274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.196620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.197002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.197012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 
00:26:13.111 [2024-04-27 00:10:43.197375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.197571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.197581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.197918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.198220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.198229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.198578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.198940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.198949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.199249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.199449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.199458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.199735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.200093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.111 [2024-04-27 00:10:43.200103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.111 qpair failed and we were unable to recover it. 00:26:13.111 [2024-04-27 00:10:43.200449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.200815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.200825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.201207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.201563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.201575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 
00:26:13.112 [2024-04-27 00:10:43.201872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.202215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.202225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.202588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.202912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.202922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.203262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.203653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.203663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.203852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.204161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.204170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.204524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.204884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.204893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.205222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.205575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.205584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.205939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.206253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.206262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 
00:26:13.112 [2024-04-27 00:10:43.206453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.206772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.206781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.207007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.207332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.207340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.207674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.208023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.208032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.208345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.208684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.208693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.209008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.209332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.209341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.209673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.209959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.209969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.210318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.210675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.210684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 
00:26:13.112 [2024-04-27 00:10:43.211034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.211380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.211389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.211742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.212078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.212088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.212320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.212653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.212663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.213024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.213385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.112 [2024-04-27 00:10:43.213394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.112 qpair failed and we were unable to recover it. 00:26:13.112 [2024-04-27 00:10:43.213732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.214092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.214102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.214470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.214816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.214825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.215194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.215553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.215562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 
00:26:13.113 [2024-04-27 00:10:43.215914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.216277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.216286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.216621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.216999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.217008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.217397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.217743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.217752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.218084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.218446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.218455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.218767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.219099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.219109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.219468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.219789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.219798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.220135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.220331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.220341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 
00:26:13.113 [2024-04-27 00:10:43.220670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.221015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.221024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.221363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.221614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.221624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.221968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.222335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.222344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.222703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.223060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.223070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.113 [2024-04-27 00:10:43.223423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.223768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.113 [2024-04-27 00:10:43.223777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.113 qpair failed and we were unable to recover it. 00:26:13.114 [2024-04-27 00:10:43.224105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.224453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.224463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.114 qpair failed and we were unable to recover it. 00:26:13.114 [2024-04-27 00:10:43.224797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.225151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.225161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.114 qpair failed and we were unable to recover it. 
00:26:13.114 [2024-04-27 00:10:43.225521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.225865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.225875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.114 qpair failed and we were unable to recover it. 00:26:13.114 [2024-04-27 00:10:43.226187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.226548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.226557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.114 qpair failed and we were unable to recover it. 00:26:13.114 [2024-04-27 00:10:43.226936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.227274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.227283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.114 qpair failed and we were unable to recover it. 00:26:13.114 [2024-04-27 00:10:43.227647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.227858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.227868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.114 qpair failed and we were unable to recover it. 00:26:13.114 [2024-04-27 00:10:43.228222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.228579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.228588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.114 qpair failed and we were unable to recover it. 00:26:13.114 [2024-04-27 00:10:43.228935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.229241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.229250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.114 qpair failed and we were unable to recover it. 00:26:13.114 [2024-04-27 00:10:43.229556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.229902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.114 [2024-04-27 00:10:43.229911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.114 qpair failed and we were unable to recover it. 
00:26:13.114 [2024-04-27 00:10:43.230267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.114 [2024-04-27 00:10:43.230606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.114 [2024-04-27 00:10:43.230615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:13.114 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure sequence repeats through 00:10:43.234 ...]
00:26:13.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 554567 Killed "${NVMF_APP[@]}" "$@"
00:26:13.114 [2024-04-27 00:10:43.234371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.114 [2024-04-27 00:10:43.234734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.114 [2024-04-27 00:10:43.234744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:13.114 qpair failed and we were unable to recover it.
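Errno 111 in the posix_sock_create messages above is ECONNREFUSED on Linux: the initiator can reach 10.0.0.2, but nothing is accepting TCP connections on port 4420 because target_disconnect.sh has just killed the target application. The short C sketch below is illustrative only; it is not SPDK code, and apart from the address 10.0.0.2 and port 4420 taken from the log, everything in it is an assumption made for the example. It shows how a plain connect() call surfaces the same errno when no listener is present:

    /* Hypothetical sketch (not SPDK code): connect to an address/port with no
     * listener and print the errno the log reports as 111 (ECONNREFUSED). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port seen in the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address seen in the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            int err = errno;
            /* Reachable host, no listener on the port -> ECONNREFUSED (111). */
            printf("connect() failed, errno = %d (%s)\n", err, strerror(err));
        }

        close(fd);
        return 0;
    }

ECONNREFUSED is only reported while the host remains reachable; if the interface itself were down, the attempts would instead time out or report a host-unreachable error, so a steady stream of errno 111 entries is consistent with the target process being down while its IP address stays configured.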
00:26:13.114 [2024-04-27 00:10:43.234943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.114 00:10:43 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
[... connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." entries continue while the shell trace below interleaves with them ...]
00:26:13.114 00:10:43 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:13.114 00:10:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:13.114 00:10:43 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:13.114 00:10:43 -- common/autotest_common.sh@10 -- # set +x
[... the connect()/qpair-failure sequence for tqpair=0x1254650 (addr=10.0.0.2, port=4420) repeats from 00:10:43.236 through 00:10:43.242 ...]
00:26:13.115 [2024-04-27 00:10:43.242945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.115 00:10:43 -- nvmf/common.sh@470 -- # nvmfpid=555499
00:26:13.115 00:10:43 -- nvmf/common.sh@471 -- # waitforlisten 555499
00:26:13.115 00:10:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:13.115 00:10:43 -- common/autotest_common.sh@817 -- # '[' -z 555499 ']'
00:26:13.115 00:10:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:13.115 00:10:43 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:13.115 00:10:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:13.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:13.115 00:10:43 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:13.115 00:10:43 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." entries for tqpair=0x1254650 interleave with the trace above and continue through 00:10:43.254 ...]
00:26:13.115 [2024-04-27 00:10:43.254759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.115 [2024-04-27 00:10:43.255093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.115 [2024-04-27 00:10:43.255102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.115 qpair failed and we were unable to recover it. 00:26:13.115 [2024-04-27 00:10:43.255313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.255659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.255667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.255891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.256229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.256238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.256612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.256965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.256974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.257275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.257501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.257510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.257865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.258182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.258191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.258529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.258882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.258891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 
00:26:13.116 [2024-04-27 00:10:43.259275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.259638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.259646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.260003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.260338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.260347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.260689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.260972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.260981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.261166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.261532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.261541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.261901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.262125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.262135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.262370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.262710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.262719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.262901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.263142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.263151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 
00:26:13.116 [2024-04-27 00:10:43.263481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.263693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.263702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.263906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.264277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.264285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.264617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.265033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.265042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.265381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.265735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.265745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.266005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.266391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.266400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.266769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.267114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.267123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.267492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.267849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.267858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 
00:26:13.116 [2024-04-27 00:10:43.268253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.268587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.268596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.268791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.269022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.269032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.269349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.269682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.269691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.269939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.270295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.270304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.116 [2024-04-27 00:10:43.270641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.271037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.116 [2024-04-27 00:10:43.271047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.116 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.271363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.271736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.271746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.272129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.272504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.272514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 
00:26:13.117 [2024-04-27 00:10:43.272721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.272940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.272950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.273298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.273590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.273600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.273981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.274320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.274329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.274557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.274903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.274913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.275123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.275485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.275494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.275877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.276234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.276243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.276453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.276783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.276792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 
00:26:13.117 [2024-04-27 00:10:43.277059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.277411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.277420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.277746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.278062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.278072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.278440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.278711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.278721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.279040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.279385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.279394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.279581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.279779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.279787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.280143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.280462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.280471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.280758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.280954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.280963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 
00:26:13.117 [2024-04-27 00:10:43.281434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.281787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.281795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.282145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.282358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.282367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.282587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.282767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.282777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.283145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.283465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.283474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.283848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.284212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.284221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.284577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.284792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.284801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.285304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.285632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.285641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 
00:26:13.117 [2024-04-27 00:10:43.286054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.286386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.286395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.286760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.287085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.287094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.287452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.287683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.117 [2024-04-27 00:10:43.287692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.117 qpair failed and we were unable to recover it. 00:26:13.117 [2024-04-27 00:10:43.288051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.288235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.288245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.288599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.288964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.288973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.289193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.289544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.289554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.289897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.290286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.290295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 
00:26:13.118 [2024-04-27 00:10:43.290662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.291043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.291053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.291395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.291716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.291725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.291978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.292319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.292328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.292543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.292919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.292929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.293271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.293566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.293575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.293919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.294169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.294178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.294570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.294882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.294891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.295220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.295554] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:26:13.118 [2024-04-27 00:10:43.295571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.295580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.295598] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.118 [2024-04-27 00:10:43.295947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.296175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.296184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.296395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.296608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.296617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.296992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.297362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.297372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.297716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.298089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.298099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.298322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.298617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.298626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.298997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.299363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.299372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 
00:26:13.118 [2024-04-27 00:10:43.299711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.300060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.300070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.300425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.300554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.300563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.300755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.301115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.301124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.118 qpair failed and we were unable to recover it. 00:26:13.118 [2024-04-27 00:10:43.301463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.118 [2024-04-27 00:10:43.301571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.301580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.301920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.302264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.302273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.302717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.303037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.303047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.303428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.303780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.303789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 
00:26:13.119 [2024-04-27 00:10:43.304159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.304488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.304497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.304830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.305182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.305192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.305272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.305567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.305576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.305912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.306137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.306146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.306468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.306827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.306841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.307187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.307499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.307507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.307870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.308165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.308174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 
00:26:13.119 [2024-04-27 00:10:43.308534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.308868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.308877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.309211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.309543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.309552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.309924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.310094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.310104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.310424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.310783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.310793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.310994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.311290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.311299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.311558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.311763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.311771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 00:26:13.119 [2024-04-27 00:10:43.312133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.312482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.119 [2024-04-27 00:10:43.312491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.119 qpair failed and we were unable to recover it. 
00:26:13.387 [2024-04-27 00:10:43.312724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.387 [2024-04-27 00:10:43.313008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.387 [2024-04-27 00:10:43.313017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.387 qpair failed and we were unable to recover it. 00:26:13.387 [2024-04-27 00:10:43.313334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.387 [2024-04-27 00:10:43.313691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.387 [2024-04-27 00:10:43.313700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.387 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.313940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.314299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.314308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.314667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.314895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.314904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.315274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.315624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.315633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.315990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.316318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.316327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.316673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.317026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.317035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 
00:26:13.388 [2024-04-27 00:10:43.317248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.317570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.317578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.317900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.318079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.318089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.318420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.318781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.318789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.319154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.319365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.319374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.319671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.319876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.319885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.320257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.320462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.320472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.320826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.321162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.321172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 
00:26:13.388 [2024-04-27 00:10:43.321517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.321850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.321860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.322212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.322571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.322580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.322805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.323146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.323155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.323499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.323863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.323873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.324092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.324330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.324339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.324555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.324898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.324908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.325254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.325618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.325627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 
00:26:13.388 [2024-04-27 00:10:43.325973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.326340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.326349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.326698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.327051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.327061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.388 [2024-04-27 00:10:43.327375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.327745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.327754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.328080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.328443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.328452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.328759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.329097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.388 [2024-04-27 00:10:43.329107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.388 qpair failed and we were unable to recover it. 00:26:13.388 [2024-04-27 00:10:43.329442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.329748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.329757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.330100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.330461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.330470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 
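The "EAL: No free 2048 kB hugepages reported on node 1" notice interleaved above appears to come from the DPDK environment layer that SPDK initializes at startup, and it indicates that NUMA node 1 had no free 2 MiB hugepages at that moment. As a minimal diagnostic sketch (assuming a Linux host exposing the standard sysfs hugepage layout; this is added here only for illustration and is not part of the test), per-node availability can be read directly from sysfs:

#!/usr/bin/env python3
# Minimal sketch: report free/total 2048 kB hugepages per NUMA node.
# Assumes a Linux host with the standard sysfs layout under
# /sys/devices/system/node; purely a diagnostic aid, not part of the CI job.
from pathlib import Path

def hugepage_counts(node: int, size_kb: int = 2048) -> tuple[int, int]:
    base = Path(f"/sys/devices/system/node/node{node}/hugepages/hugepages-{size_kb}kB")
    total = int((base / "nr_hugepages").read_text())
    free = int((base / "free_hugepages").read_text())
    return free, total

if __name__ == "__main__":
    for node_dir in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        node = int(node_dir.name.removeprefix("node"))
        try:
            free, total = hugepage_counts(node)
        except FileNotFoundError:
            continue  # this node has no 2048 kB hugepage directory
        print(f"node {node}: {free}/{total} free 2048 kB hugepages")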
00:26:13.389 [2024-04-27 00:10:43.330851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.331189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.331198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.331549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.331869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.331878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.332092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.332364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.332373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.332598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.332898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.332908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.333258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.333630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.333639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.333827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.334163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.334173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.334538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.334897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.334906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 
00:26:13.389 [2024-04-27 00:10:43.335080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.335452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.335461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.335756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.336042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.336052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.336372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.336729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.336738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.337083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.337334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.337343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.337545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.337926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.337935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.338250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.338459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.338468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.338910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.339243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.339252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 
00:26:13.389 [2024-04-27 00:10:43.339473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.339647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.339657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.340000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.340371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.340380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.340746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.341059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.341068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.341413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.341787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.341796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.342155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.342509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.342518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.342879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.343229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.343240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.343605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.343818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.343827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 
00:26:13.389 [2024-04-27 00:10:43.344066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.344410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.344419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.344543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.344872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.344881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.345227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.345547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.345556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.345711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.346034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.346044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.389 qpair failed and we were unable to recover it. 00:26:13.389 [2024-04-27 00:10:43.346414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.389 [2024-04-27 00:10:43.346649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.346658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.347027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.347270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.347279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.347346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.347682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.347692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 
00:26:13.390 [2024-04-27 00:10:43.348079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.348434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.348443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.348761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.348860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.348871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.349161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.349527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.349535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.349858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.350194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.350203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.350572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.350948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.350958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.351316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.351570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.351579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.351934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.352251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.352260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 
00:26:13.390 [2024-04-27 00:10:43.352452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.352786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.352795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.353127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.353496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.353505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.353873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.354104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.354113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.354454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.354777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.354786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.355112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.355442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.355451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.355784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.356111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.356121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.356320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.356663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.356672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 
00:26:13.390 [2024-04-27 00:10:43.357014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.357228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.357237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.357449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.357640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.357649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.358032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.358388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.358397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.358756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.359096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.359106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.359318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.359675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.359685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.360066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.360386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.360395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.360673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.360896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.360906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 
00:26:13.390 [2024-04-27 00:10:43.361271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.361635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.361644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.362003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.362358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.362367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.390 [2024-04-27 00:10:43.362737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.363141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.390 [2024-04-27 00:10:43.363150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.390 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.363492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.363847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.363856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.364231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.364612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.364621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.364852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.365028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.365037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.365396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.365755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.365764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 
00:26:13.391 [2024-04-27 00:10:43.366102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.366470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.366480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.366696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.366881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.366890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.367346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.367701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.367709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.368085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.368452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.368461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.368687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.369010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.369019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.369339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.369677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.369686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.370055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.370381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.370390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 
00:26:13.391 [2024-04-27 00:10:43.370705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.371047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.371056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.371408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.371758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.371767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.371969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.372308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.372317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.372652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.373005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.373015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.373243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.373494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.373503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.373867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.374244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.374253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.374536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.374770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.374779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 
00:26:13.391 [2024-04-27 00:10:43.375148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.375524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.375533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.375729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.376059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.376069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.376276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.376631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.376640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.377005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.377370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.377379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.377561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.377872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.377881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.378062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.378372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.378380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.378572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.378936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.378946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 
00:26:13.391 [2024-04-27 00:10:43.379146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.379329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.379338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.379658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.380001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.391 [2024-04-27 00:10:43.380011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.391 qpair failed and we were unable to recover it. 00:26:13.391 [2024-04-27 00:10:43.380366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.380576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.380585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.380936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.381146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.381157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.381499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.381748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.381757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.382103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.382407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.382416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.382798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.383106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.383115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 
00:26:13.392 [2024-04-27 00:10:43.383456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.383791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.383800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.383963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.384305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.384314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.384673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.384872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.384881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.384981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.392 [2024-04-27 00:10:43.385216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.385300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.385308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.385640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.386012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.386022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.386391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.386760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.386769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.387106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.387437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.387451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 
00:26:13.392 [2024-04-27 00:10:43.387661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.388001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.388012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.388214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.388586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.388595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.388692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.389035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.389046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.389260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.389635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.389645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.390006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.390253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.390262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.390708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.391067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.391077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.391320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.391679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.391689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 
00:26:13.392 [2024-04-27 00:10:43.392031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.392346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.392355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.392700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.392874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.392884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.392 qpair failed and we were unable to recover it. 00:26:13.392 [2024-04-27 00:10:43.393103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.393437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.392 [2024-04-27 00:10:43.393446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.393815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.394167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.394176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.394529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.394858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.394868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.395214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.395387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.395396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.395805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.396021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.396030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 
00:26:13.393 [2024-04-27 00:10:43.396380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.396687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.396696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.396899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.397090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.397100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.397401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.397585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.397593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.397967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.398186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.398196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.398575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.398904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.398914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.399335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.399709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.399718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.400113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.400441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.400450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 
00:26:13.393 [2024-04-27 00:10:43.400790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.401134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.401143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.401384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.401738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.401747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.402082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.402477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.402486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.402800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.403114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.403123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.403322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.403676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.403686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.403897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.404286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.404296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.404365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.404683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.404692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 
00:26:13.393 [2024-04-27 00:10:43.404896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.405282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.405291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.405663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.406024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.406033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.406282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.406613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.406622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.406981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.407347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.407356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.407564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.407767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.407776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.408148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.408332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.408341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.408752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.408848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.408857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 
00:26:13.393 [2024-04-27 00:10:43.409204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.409510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.409519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.393 qpair failed and we were unable to recover it. 00:26:13.393 [2024-04-27 00:10:43.409883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.393 [2024-04-27 00:10:43.410080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.410090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.410448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.410799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.410808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.411208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.411507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.411517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.411878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.412232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.412240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.412594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.412963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.412973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.413291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.413624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.413634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 
00:26:13.394 [2024-04-27 00:10:43.413978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.414357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.414366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.414729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.414960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.414970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.415173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.415532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.415541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.415874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.416147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.416156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.416478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.416734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.416743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.417096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.417469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.417479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.417811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.418158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.418169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 
00:26:13.394 [2024-04-27 00:10:43.418370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.418594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.418602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.418945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.419263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.419274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.419608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.419901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.419912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.420257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.420635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.420644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.421015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.421371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.421380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.421755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.422104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.422114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.422466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.422834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.422847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 
00:26:13.394 [2024-04-27 00:10:43.423186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.423391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.423400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.423742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.423993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.424002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.424347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.424676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.424685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.424964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.425324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.425333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.425700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.426068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.426082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.426412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.426566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.426576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.394 qpair failed and we were unable to recover it. 00:26:13.394 [2024-04-27 00:10:43.426979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.394 [2024-04-27 00:10:43.427339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.427348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 
00:26:13.395 [2024-04-27 00:10:43.427711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.427997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.428006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.428344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.428703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.428712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.429054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.429279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.429288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.429705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.430004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.430013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.430228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.430454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.430463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.430827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.431194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.431203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.431397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.431614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.431622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 
00:26:13.395 [2024-04-27 00:10:43.432027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.432389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.432398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.432765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.433106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.433115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.433492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.433842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.433852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.434236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.434600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.434610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.434976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.435188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.435197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.435550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.435927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.435937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.436121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.436547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.436556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 
00:26:13.395 [2024-04-27 00:10:43.436764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.437105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.437115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.437462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.437813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.437822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.438158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.438371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.438380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.438754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.439099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.439109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.439439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.439802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.439811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.440174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.440525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.440535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.440906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.441205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.441213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 
00:26:13.395 [2024-04-27 00:10:43.441534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.441899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.441909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.442242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.442570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.442579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.395 qpair failed and we were unable to recover it. 00:26:13.395 [2024-04-27 00:10:43.442949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.395 [2024-04-27 00:10:43.443290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.443299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.443661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.443910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.443919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.444303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.444680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.444688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.444888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.445237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.445247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.445593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.445833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.445844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 
00:26:13.396 [2024-04-27 00:10:43.446221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.446582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.446591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:13.396 qpair failed and we were unable to recover it.
00:26:13.396 [2024-04-27 00:10:43.446951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.447322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.447332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:13.396 qpair failed and we were unable to recover it.
00:26:13.396 [2024-04-27 00:10:43.447675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.448048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.448057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:13.396 qpair failed and we were unable to recover it.
00:26:13.396 [2024-04-27 00:10:43.448420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.448786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.448795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:13.396 qpair failed and we were unable to recover it.
00:26:13.396 [2024-04-27 00:10:43.449119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.449551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.396 [2024-04-27 00:10:43.449560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:13.396 qpair failed and we were unable to recover it.
00:26:13.396 [2024-04-27 00:10:43.449598] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:13.396 [2024-04-27 00:10:43.449626] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:13.396 [2024-04-27 00:10:43.449633] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:13.396 [2024-04-27 00:10:43.449639] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:13.396 [2024-04-27 00:10:43.449645] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
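Taken together, the app_setup_trace notices above are the how-to for digging into these qpair failures: a broad tracepoint group mask (0xFFFF) was specified, 'spdk_trace -s nvmf -i 0' captures a snapshot of the recorded events while the target application is still running (plain 'spdk_trace' also works when it is the only SPDK application up), and copying /dev/shm/nvmf_trace.0 preserves the trace shared-memory file so the same events can be examined offline after the run.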
00:26:13.396 [2024-04-27 00:10:43.449819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:13.396 [2024-04-27 00:10:43.449942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.449964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:13.396 [2024-04-27 00:10:43.450087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:13.396 [2024-04-27 00:10:43.450087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:13.396 [2024-04-27 00:10:43.450264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.450273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.450617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.450843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.450853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.451180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.451511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.451520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.451920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.452305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.452314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.452696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.452958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.452968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.453334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.453658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.453667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 
00:26:13.396 [2024-04-27 00:10:43.453934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.454149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.454159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.454503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.454875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.454885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.455131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.455476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.455485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.455830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.456266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.456275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.456471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.456798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.456807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.457182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.457560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.457569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 00:26:13.396 [2024-04-27 00:10:43.457937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.458142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.396 [2024-04-27 00:10:43.458151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.396 qpair failed and we were unable to recover it. 
00:26:13.397 [2024-04-27 00:10:43.458358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.458620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.458629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.459016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.459358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.459368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.459733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.460083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.460092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.460448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.460807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.460816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.461194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.461550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.461559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.461926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.462275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.462285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.462490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.462724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.462733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 
00:26:13.397 [2024-04-27 00:10:43.463138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.463390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.463398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.463761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.464004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.464014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.464226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.464597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.464606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.464801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.465136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.465146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.465459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.465622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.465630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.465963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.466270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.466279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.466601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.466936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.466946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 
00:26:13.397 [2024-04-27 00:10:43.467176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.467386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.467395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.467811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.468059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.468068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.468429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.468782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.468791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.469138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.469500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.469509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.469863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.470252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.470261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.470467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.470723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.470733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.471082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.471445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.471455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 
00:26:13.397 [2024-04-27 00:10:43.471820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.472031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.472041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.472247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.472343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.472351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.472688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.473065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.473074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.473289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.473493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.473501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.397 [2024-04-27 00:10:43.473843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.474226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.397 [2024-04-27 00:10:43.474236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.397 qpair failed and we were unable to recover it. 00:26:13.398 [2024-04-27 00:10:43.474591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.398 [2024-04-27 00:10:43.474648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.398 [2024-04-27 00:10:43.474658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.398 qpair failed and we were unable to recover it. 00:26:13.398 [2024-04-27 00:10:43.474960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.398 [2024-04-27 00:10:43.475165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.398 [2024-04-27 00:10:43.475174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.398 qpair failed and we were unable to recover it. 
00:26:13.398 [2024-04-27 00:10:43.475528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.398 [2024-04-27 00:10:43.475740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.398 [2024-04-27 00:10:43.475749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420
00:26:13.398 qpair failed and we were unable to recover it.
00:26:13.398 [... the same cycle -- posix_sock_create connect() failures (errno = 111), an nvme_tcp_qpair_connect_sock connection error for tqpair=0x1254650 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." -- repeats continuously from 00:10:43.476 through 00:10:43.549 ...]
00:26:13.403 [... connect() failed (errno = 111) / sock connection error of tqpair=0x1254650 / "qpair failed and we were unable to recover it." cycles continue from 00:10:43.550 through 00:10:43.553 ...]
00:26:13.404 00:10:43 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:13.404 00:10:43 -- common/autotest_common.sh@850 -- # return 0
00:26:13.404 00:10:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:26:13.404 00:10:43 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:13.404 00:10:43 -- common/autotest_common.sh@10 -- # set +x
00:26:13.404 [... the same connect() failed / sock connection error / qpair-failed cycle for tqpair=0x1254650 (addr=10.0.0.2, port=4420) continues, interleaved with the shell trace above, from 00:10:43.553 through 00:10:43.566 ...]
00:26:13.405 [2024-04-27 00:10:43.567116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.567481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.567491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.567692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.567884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.567893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.568077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.568396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.568405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.568620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.568929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.568938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.568997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.569327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.569337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.569528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.569832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.569846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.570278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.570512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.570521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 
00:26:13.405 [2024-04-27 00:10:43.570865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.571184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.571192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.571568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.571931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.571940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.572304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.572625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.572634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.572983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.573361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.573371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.573737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.573944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.573954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.574310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.574666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.574675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.575024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.575389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.575398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 
00:26:13.405 [2024-04-27 00:10:43.575772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.576106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.576116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.576455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.576775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.576783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.576978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.577326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.577335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.577691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.578037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.578046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.578413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.578737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.578746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.579123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.579455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.579464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.579828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.580188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.580197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 
00:26:13.405 [2024-04-27 00:10:43.580556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.580921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.580930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.581246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.581388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.581396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.581463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.581809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.581818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.582213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.582409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.582418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.405 qpair failed and we were unable to recover it. 00:26:13.405 [2024-04-27 00:10:43.582597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.405 [2024-04-27 00:10:43.582852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.582862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.583213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.583573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.583582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.583959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.584335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.584344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 
00:26:13.406 [2024-04-27 00:10:43.584691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.584924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.584934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.585300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.585669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.585679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.585886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.586202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.586212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.586550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.586767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.586776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.586992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.587316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.587325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.587641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.587992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.588002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.588374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.588591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.588602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 
00:26:13.406 [2024-04-27 00:10:43.588920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.589111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.589122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.589181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.589385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.589394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.589583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.589834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.589848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.590133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.590445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.590454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.590827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.591165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.591174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.591505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.591893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.591903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 00:26:13.406 [2024-04-27 00:10:43.592221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.592506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.406 [2024-04-27 00:10:43.592517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.406 qpair failed and we were unable to recover it. 
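For reference, errno = 111 in the records above is ECONNREFUSED on Linux: the host-side nvme_tcp connect attempts to 10.0.0.2 port 4420 are being actively refused, which is consistent with the NVMe-oF TCP listener not yet accepting connections while the target is still being configured in the trace that follows. A quick way to confirm the errno mapping on a typical Linux system (header path may differ by distro/architecture):

  # ECONNREFUSED is defined as 111 in the generic errno header
  grep -n 'ECONNREFUSED' /usr/include/asm-generic/errno.h
  # expected: #define ECONNREFUSED 111 /* Connection refused */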
00:26:13.406 00:10:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:13.406 00:10:43 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:13.406 00:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:13.406 00:10:43 -- common/autotest_common.sh@10 -- # set +x
[... repeated connect() failed (errno = 111) / qpair failure records for tqpair=0x1254650 elided ...]
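The rpc_cmd trace above creates a RAM-backed (malloc) bdev named Malloc0; rpc_cmd is the test harness's wrapper around SPDK's JSON-RPC client. A rough standalone sketch of the same step, assuming a running SPDK target and the default RPC socket, and assuming the first argument is the size in MB and the second the block size in bytes:

  # sketch only: create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

The bare "Malloc0" printed a little further down is the RPC's return value, i.e. the name of the bdev that was created.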
00:26:13.691 Malloc0
00:26:13.691 00:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:13.691 00:10:43 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:13.691 00:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:13.691 00:10:43 -- common/autotest_common.sh@10 -- # set +x
00:26:13.692 [2024-04-27 00:10:43.618690] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... repeated connect() failed (errno = 111) / qpair failure records for tqpair=0x1254650 elided ...]
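The nvmf_create_transport call above, acknowledged by the "*** TCP Transport Init ***" notice from tcp.c, initializes the NVMe-oF TCP transport inside the target. A minimal standalone sketch of that step; the extra -o flag the test passes through its wrapper is not explained in this excerpt, so it is left out here:

  # sketch only: initialize the TCP transport in a running SPDK nvmf target
  ./scripts/rpc.py nvmf_create_transport -t tcp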
00:26:13.692 00:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:13.692 00:10:43 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:13.692 00:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:13.692 00:10:43 -- common/autotest_common.sh@10 -- # set +x
[... repeated connect() failed (errno = 111) / qpair failure records for tqpair=0x1254650 elided ...]
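Next the trace creates the subsystem nqn.2016-06.io.spdk:cnode1. A standalone sketch, assuming -a and -s keep their usual rpc.py meanings (allow any host to connect, and set the subsystem serial number, respectively):

  # sketch only: create an NVMe-oF subsystem that any host may connect to
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001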
00:26:13.693 [2024-04-27 00:10:43.637233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.637586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.637595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.638042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.638320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.638329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.638669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.639017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.639026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.639421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 00:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.693 [2024-04-27 00:10:43.639775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.639784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 00:10:43 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.693 [2024-04-27 00:10:43.640112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.640314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.640323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 00:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.693 00:10:43 -- common/autotest_common.sh@10 -- # set +x 00:26:13.693 [2024-04-27 00:10:43.640662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.641024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.641033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.641380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.641692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.641701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 
00:26:13.693 [2024-04-27 00:10:43.641925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.642222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.642231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.642595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.642954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.642964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.643169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.643467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.643475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.643789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.644101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.644110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.644455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.644833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.644845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.645218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.645524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.645532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.645901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.646230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.646239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 
00:26:13.693 [2024-04-27 00:10:43.646603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.646958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.646967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.647315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.647672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.693 [2024-04-27 00:10:43.647681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.693 qpair failed and we were unable to recover it. 00:26:13.693 [2024-04-27 00:10:43.648015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.648332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.648341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.648685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.649088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.649097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.649493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.649844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.649853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.650233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.650592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.650601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.651079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.651308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.651321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 
00:26:13.694 [2024-04-27 00:10:43.651395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 00:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.694 [2024-04-27 00:10:43.651772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.651787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 00:10:43 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.694 [2024-04-27 00:10:43.652165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.652424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.652433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 00:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.694 00:10:43 -- common/autotest_common.sh@10 -- # set +x 00:26:13.694 [2024-04-27 00:10:43.652777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.653169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.653179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.653575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.653781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.653789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.653971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.654333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.654344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.654714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.655071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.655081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.655435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.655792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.655805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 
00:26:13.694 [2024-04-27 00:10:43.656049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.656375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.656384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.656731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.656890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.656899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.657152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.657517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.657526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.657884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.658285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.658294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.658700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.658843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.694 [2024-04-27 00:10:43.658852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1254650 with addr=10.0.0.2, port=4420 00:26:13.694 qpair failed and we were unable to recover it. 
00:26:13.694 [2024-04-27 00:10:43.658983] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.694 00:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.694 00:10:43 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:13.694 00:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.694 00:10:43 -- common/autotest_common.sh@10 -- # set +x 00:26:13.694 [2024-04-27 00:10:43.669600] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.694 [2024-04-27 00:10:43.669678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.694 [2024-04-27 00:10:43.669697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.694 [2024-04-27 00:10:43.669705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.694 [2024-04-27 00:10:43.669712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.694 [2024-04-27 00:10:43.669732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 00:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.694 00:10:43 -- host/target_disconnect.sh@58 -- # wait 554621 00:26:13.694 [2024-04-27 00:10:43.679413] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.694 [2024-04-27 00:10:43.679483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.694 [2024-04-27 00:10:43.679500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.694 [2024-04-27 00:10:43.679510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.694 [2024-04-27 00:10:43.679516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.694 [2024-04-27 00:10:43.679531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.694 qpair failed and we were unable to recover it. 
00:26:13.694 [2024-04-27 00:10:43.689534] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.694 [2024-04-27 00:10:43.689620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.694 [2024-04-27 00:10:43.689636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.694 [2024-04-27 00:10:43.689644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.694 [2024-04-27 00:10:43.689650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.694 [2024-04-27 00:10:43.689664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.694 qpair failed and we were unable to recover it. 00:26:13.694 [2024-04-27 00:10:43.699540] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.694 [2024-04-27 00:10:43.699605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.694 [2024-04-27 00:10:43.699620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.694 [2024-04-27 00:10:43.699627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.699633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.699647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 00:26:13.695 [2024-04-27 00:10:43.709517] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.709592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.709608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.709614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.709621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.709634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 
00:26:13.695 [2024-04-27 00:10:43.719534] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.719598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.719613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.719620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.719627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.719641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 00:26:13.695 [2024-04-27 00:10:43.729558] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.729619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.729635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.729642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.729648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.729661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 00:26:13.695 [2024-04-27 00:10:43.739564] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.739625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.739641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.739648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.739654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.739667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 
00:26:13.695 [2024-04-27 00:10:43.749608] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.749676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.749690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.749697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.749703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.749717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 00:26:13.695 [2024-04-27 00:10:43.759642] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.759734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.759750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.759758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.759764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.759778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 00:26:13.695 [2024-04-27 00:10:43.769535] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.769593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.769611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.769618] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.769625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.769638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 
00:26:13.695 [2024-04-27 00:10:43.779684] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.779747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.779762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.779769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.779775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.779788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 00:26:13.695 [2024-04-27 00:10:43.789761] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.789832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.789851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.789858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.789864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.789879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 00:26:13.695 [2024-04-27 00:10:43.799654] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.799722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.695 [2024-04-27 00:10:43.799737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.695 [2024-04-27 00:10:43.799744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.695 [2024-04-27 00:10:43.799750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.695 [2024-04-27 00:10:43.799764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.695 qpair failed and we were unable to recover it. 
00:26:13.695 [2024-04-27 00:10:43.809695] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.695 [2024-04-27 00:10:43.809754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.809769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.809776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.809782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.809795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 00:26:13.696 [2024-04-27 00:10:43.819804] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.696 [2024-04-27 00:10:43.819865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.819880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.819887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.819893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.819907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 00:26:13.696 [2024-04-27 00:10:43.829853] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.696 [2024-04-27 00:10:43.829919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.829934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.829941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.829947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.829961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 
00:26:13.696 [2024-04-27 00:10:43.839945] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.696 [2024-04-27 00:10:43.840006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.840021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.840028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.840034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.840047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 00:26:13.696 [2024-04-27 00:10:43.849916] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.696 [2024-04-27 00:10:43.850058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.850074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.850082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.850088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.850102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 00:26:13.696 [2024-04-27 00:10:43.859907] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.696 [2024-04-27 00:10:43.860005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.860025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.860033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.860039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.860052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 
00:26:13.696 [2024-04-27 00:10:43.869952] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.696 [2024-04-27 00:10:43.870023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.870038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.870045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.870051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.870065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 00:26:13.696 [2024-04-27 00:10:43.880031] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.696 [2024-04-27 00:10:43.880112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.880127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.880135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.880141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.880154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 00:26:13.696 [2024-04-27 00:10:43.890018] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.696 [2024-04-27 00:10:43.890081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.890096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.890103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.890109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.890123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 
00:26:13.696 [2024-04-27 00:10:43.900014] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.696 [2024-04-27 00:10:43.900075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.696 [2024-04-27 00:10:43.900090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.696 [2024-04-27 00:10:43.900097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.696 [2024-04-27 00:10:43.900104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.696 [2024-04-27 00:10:43.900121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.696 qpair failed and we were unable to recover it. 00:26:13.957 [2024-04-27 00:10:43.910066] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.957 [2024-04-27 00:10:43.910141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.957 [2024-04-27 00:10:43.910156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.957 [2024-04-27 00:10:43.910163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.957 [2024-04-27 00:10:43.910169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.957 [2024-04-27 00:10:43.910182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.957 qpair failed and we were unable to recover it. 00:26:13.957 [2024-04-27 00:10:43.920083] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.957 [2024-04-27 00:10:43.920146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.957 [2024-04-27 00:10:43.920161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.957 [2024-04-27 00:10:43.920168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.957 [2024-04-27 00:10:43.920175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.957 [2024-04-27 00:10:43.920188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.957 qpair failed and we were unable to recover it. 
00:26:13.957 [2024-04-27 00:10:43.930118] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.957 [2024-04-27 00:10:43.930185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.957 [2024-04-27 00:10:43.930201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.957 [2024-04-27 00:10:43.930208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.957 [2024-04-27 00:10:43.930214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.957 [2024-04-27 00:10:43.930229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.957 qpair failed and we were unable to recover it. 00:26:13.957 [2024-04-27 00:10:43.940135] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.957 [2024-04-27 00:10:43.940208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.957 [2024-04-27 00:10:43.940223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.957 [2024-04-27 00:10:43.940230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.957 [2024-04-27 00:10:43.940236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.957 [2024-04-27 00:10:43.940249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.957 qpair failed and we were unable to recover it. 00:26:13.957 [2024-04-27 00:10:43.950147] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.957 [2024-04-27 00:10:43.950217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.957 [2024-04-27 00:10:43.950236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.957 [2024-04-27 00:10:43.950243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.957 [2024-04-27 00:10:43.950249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:43.950262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 
00:26:13.958 [2024-04-27 00:10:43.960170] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:43.960231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:43.960247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:43.960258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:43.960265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:43.960279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:43.970223] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:43.970283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:43.970298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:43.970306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:43.970312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:43.970326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:43.980253] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:43.980332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:43.980347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:43.980355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:43.980361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:43.980374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 
00:26:13.958 [2024-04-27 00:10:43.990285] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:43.990349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:43.990364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:43.990371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:43.990377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:43.990394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.000322] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.000379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.000394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.000402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.000408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.000421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.010353] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.010457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.010472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.010480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.010486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.010500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 
00:26:13.958 [2024-04-27 00:10:44.020365] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.020469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.020484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.020492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.020498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.020512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.030388] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.030504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.030519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.030527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.030534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.030547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.040377] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.040435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.040453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.040460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.040467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.040480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 
00:26:13.958 [2024-04-27 00:10:44.050425] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.050492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.050507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.050514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.050520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.050533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.060351] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.060415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.060431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.060438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.060445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.060459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.070479] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.070540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.070555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.070563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.070569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.070582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 
00:26:13.958 [2024-04-27 00:10:44.080488] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.080554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.080580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.080589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.080601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.080619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.090544] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.090657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.090683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.090692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.090699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.090716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.100547] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.100608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.100625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.100633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.100639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.100653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 
00:26:13.958 [2024-04-27 00:10:44.110594] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.110656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.110671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.110679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.110685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.110698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.120613] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.120677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.120692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.120699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.120706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.120719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.130656] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.130718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.130734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.130742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.130748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.130762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 
00:26:13.958 [2024-04-27 00:10:44.140680] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.140741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.140756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.140764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.140770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.140784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.150714] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.150780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.150795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.150802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.150808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.150822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:13.958 [2024-04-27 00:10:44.160712] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.160780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.160796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.160803] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.160809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.160823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 
00:26:13.958 [2024-04-27 00:10:44.170744] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.958 [2024-04-27 00:10:44.170803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.958 [2024-04-27 00:10:44.170818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.958 [2024-04-27 00:10:44.170825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.958 [2024-04-27 00:10:44.170836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:13.958 [2024-04-27 00:10:44.170853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.958 qpair failed and we were unable to recover it. 00:26:14.219 [2024-04-27 00:10:44.180784] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.219 [2024-04-27 00:10:44.180890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.219 [2024-04-27 00:10:44.180906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.219 [2024-04-27 00:10:44.180914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.219 [2024-04-27 00:10:44.180920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.219 [2024-04-27 00:10:44.180934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.219 qpair failed and we were unable to recover it. 00:26:14.219 [2024-04-27 00:10:44.190820] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.219 [2024-04-27 00:10:44.190893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.219 [2024-04-27 00:10:44.190909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.219 [2024-04-27 00:10:44.190916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.219 [2024-04-27 00:10:44.190922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.219 [2024-04-27 00:10:44.190936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.219 qpair failed and we were unable to recover it. 
00:26:14.219 [2024-04-27 00:10:44.200843] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.219 [2024-04-27 00:10:44.200908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.219 [2024-04-27 00:10:44.200924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.219 [2024-04-27 00:10:44.200931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.219 [2024-04-27 00:10:44.200937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.219 [2024-04-27 00:10:44.200952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.219 qpair failed and we were unable to recover it. 00:26:14.219 [2024-04-27 00:10:44.210880] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.219 [2024-04-27 00:10:44.210943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.219 [2024-04-27 00:10:44.210958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.219 [2024-04-27 00:10:44.210965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.219 [2024-04-27 00:10:44.210971] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.219 [2024-04-27 00:10:44.210984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.219 qpair failed and we were unable to recover it. 00:26:14.219 [2024-04-27 00:10:44.220910] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.219 [2024-04-27 00:10:44.220994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.219 [2024-04-27 00:10:44.221009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.219 [2024-04-27 00:10:44.221017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.219 [2024-04-27 00:10:44.221023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.219 [2024-04-27 00:10:44.221037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.219 qpair failed and we were unable to recover it. 
00:26:14.220 [2024-04-27 00:10:44.230937] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.220 [2024-04-27 00:10:44.231009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.220 [2024-04-27 00:10:44.231025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.220 [2024-04-27 00:10:44.231032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.220 [2024-04-27 00:10:44.231038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.220 [2024-04-27 00:10:44.231052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.220 qpair failed and we were unable to recover it. 00:26:14.220 [2024-04-27 00:10:44.240984] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.220 [2024-04-27 00:10:44.241047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.220 [2024-04-27 00:10:44.241062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.220 [2024-04-27 00:10:44.241069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.220 [2024-04-27 00:10:44.241075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.220 [2024-04-27 00:10:44.241089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.220 qpair failed and we were unable to recover it. 00:26:14.220 [2024-04-27 00:10:44.250986] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.220 [2024-04-27 00:10:44.251046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.220 [2024-04-27 00:10:44.251061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.220 [2024-04-27 00:10:44.251069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.220 [2024-04-27 00:10:44.251075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.220 [2024-04-27 00:10:44.251088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.220 qpair failed and we were unable to recover it. 
00:26:14.220 [2024-04-27 00:10:44.261105] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.220 [2024-04-27 00:10:44.261173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.220 [2024-04-27 00:10:44.261188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.220 [2024-04-27 00:10:44.261196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.220 [2024-04-27 00:10:44.261206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.220 [2024-04-27 00:10:44.261220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.220 qpair failed and we were unable to recover it. 00:26:14.220 [2024-04-27 00:10:44.271040] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.220 [2024-04-27 00:10:44.271107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.220 [2024-04-27 00:10:44.271122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.220 [2024-04-27 00:10:44.271129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.220 [2024-04-27 00:10:44.271135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.220 [2024-04-27 00:10:44.271148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.220 qpair failed and we were unable to recover it. 00:26:14.220 [2024-04-27 00:10:44.280954] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.220 [2024-04-27 00:10:44.281013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.220 [2024-04-27 00:10:44.281027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.220 [2024-04-27 00:10:44.281034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.220 [2024-04-27 00:10:44.281041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.220 [2024-04-27 00:10:44.281054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.220 qpair failed and we were unable to recover it. 
00:26:14.220 [2024-04-27 00:10:44.290972] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.220 [2024-04-27 00:10:44.291034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.220 [2024-04-27 00:10:44.291049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.220 [2024-04-27 00:10:44.291056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.220 [2024-04-27 00:10:44.291062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.220 [2024-04-27 00:10:44.291075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.220 qpair failed and we were unable to recover it. 00:26:14.220 [2024-04-27 00:10:44.301139] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.220 [2024-04-27 00:10:44.301200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.220 [2024-04-27 00:10:44.301215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.220 [2024-04-27 00:10:44.301223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.220 [2024-04-27 00:10:44.301229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:14.220 [2024-04-27 00:10:44.301243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.220 qpair failed and we were unable to recover it. 
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Write completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Write completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Write completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Write completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Write completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Write completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Write completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Read completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Write completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 Write completed with error (sct=0, sc=8)
00:26:14.220 starting I/O failed
00:26:14.220 [2024-04-27 00:10:44.301454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:14.220 [2024-04-27 00:10:44.311076] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.220 [2024-04-27 00:10:44.311137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.220 [2024-04-27 00:10:44.311152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.220 [2024-04-27 00:10:44.311158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.220 [2024-04-27 00:10:44.311163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.220 [2024-04-27 00:10:44.311175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.220 qpair failed and we were unable to recover it.
00:26:14.221 [2024-04-27 00:10:44.321085] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.321142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.321155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.321160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.321165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.321177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it.
00:26:14.221 [2024-04-27 00:10:44.331199] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.331250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.331265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.331270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.331274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.331285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it.
00:26:14.221 [2024-04-27 00:10:44.341244] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.341297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.341309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.341314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.341319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.341329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it.
00:26:14.221 [2024-04-27 00:10:44.351266] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.351348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.351359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.351364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.351369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.351379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it. 00:26:14.221 [2024-04-27 00:10:44.361185] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.361236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.361249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.361254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.361258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.361269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it. 00:26:14.221 [2024-04-27 00:10:44.371315] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.371365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.371376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.371381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.371388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.371399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it. 
00:26:14.221 [2024-04-27 00:10:44.381385] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.381447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.381458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.381463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.381468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.381478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it. 00:26:14.221 [2024-04-27 00:10:44.391363] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.391423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.391434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.391439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.391443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.391453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it. 00:26:14.221 [2024-04-27 00:10:44.401457] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.401508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.401520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.401525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.401530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.401539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it. 
00:26:14.221 [2024-04-27 00:10:44.411421] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.411513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.411525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.411530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.411534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.411544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it. 00:26:14.221 [2024-04-27 00:10:44.421432] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.421491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.421502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.421507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.421511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.421521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it. 00:26:14.221 [2024-04-27 00:10:44.431478] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.221 [2024-04-27 00:10:44.431533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.221 [2024-04-27 00:10:44.431545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.221 [2024-04-27 00:10:44.431550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.221 [2024-04-27 00:10:44.431555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.221 [2024-04-27 00:10:44.431565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.221 qpair failed and we were unable to recover it. 
00:26:14.482 [2024-04-27 00:10:44.441522] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.482 [2024-04-27 00:10:44.441603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.482 [2024-04-27 00:10:44.441614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.482 [2024-04-27 00:10:44.441619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.482 [2024-04-27 00:10:44.441624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.482 [2024-04-27 00:10:44.441634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.482 qpair failed and we were unable to recover it. 00:26:14.482 [2024-04-27 00:10:44.451537] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.482 [2024-04-27 00:10:44.451592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.482 [2024-04-27 00:10:44.451603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.482 [2024-04-27 00:10:44.451608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.482 [2024-04-27 00:10:44.451613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.482 [2024-04-27 00:10:44.451623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.482 qpair failed and we were unable to recover it. 00:26:14.482 [2024-04-27 00:10:44.461651] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.482 [2024-04-27 00:10:44.461716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.482 [2024-04-27 00:10:44.461727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.482 [2024-04-27 00:10:44.461732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.482 [2024-04-27 00:10:44.461740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.482 [2024-04-27 00:10:44.461750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.482 qpair failed and we were unable to recover it. 
00:26:14.482 [2024-04-27 00:10:44.471643] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.482 [2024-04-27 00:10:44.471701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.482 [2024-04-27 00:10:44.471712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.482 [2024-04-27 00:10:44.471718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.482 [2024-04-27 00:10:44.471723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.482 [2024-04-27 00:10:44.471733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.482 qpair failed and we were unable to recover it. 00:26:14.482 [2024-04-27 00:10:44.481680] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.482 [2024-04-27 00:10:44.481740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.482 [2024-04-27 00:10:44.481750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.482 [2024-04-27 00:10:44.481758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.482 [2024-04-27 00:10:44.481762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.482 [2024-04-27 00:10:44.481772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.482 qpair failed and we were unable to recover it. 00:26:14.482 [2024-04-27 00:10:44.491719] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.491776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.491787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.491792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.491797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.491807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 
00:26:14.483 [2024-04-27 00:10:44.501690] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.501750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.501761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.501766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.501771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.501781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 00:26:14.483 [2024-04-27 00:10:44.511695] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.511750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.511761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.511767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.511771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.511781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 00:26:14.483 [2024-04-27 00:10:44.521708] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.521759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.521770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.521775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.521780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.521790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 
00:26:14.483 [2024-04-27 00:10:44.531773] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.531830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.531846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.531852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.531857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.531869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 00:26:14.483 [2024-04-27 00:10:44.541669] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.541724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.541735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.541740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.541744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.541755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 00:26:14.483 [2024-04-27 00:10:44.551714] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.551772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.551784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.551792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.551797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.551808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 
00:26:14.483 [2024-04-27 00:10:44.561834] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.561897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.561909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.561914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.561919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.561930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 00:26:14.483 [2024-04-27 00:10:44.571860] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.571927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.571940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.571945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.571950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.571961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 00:26:14.483 [2024-04-27 00:10:44.581893] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.581946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.581957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.581963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.581967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.581978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 
00:26:14.483 [2024-04-27 00:10:44.591934] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.592027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.592037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.592042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.592047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.592057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 00:26:14.483 [2024-04-27 00:10:44.601955] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.602006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.602017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.602022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.602026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.602037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 00:26:14.483 [2024-04-27 00:10:44.612065] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.612118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.612128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.612133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.612138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.612148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 
00:26:14.483 [2024-04-27 00:10:44.622036] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.483 [2024-04-27 00:10:44.622090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.483 [2024-04-27 00:10:44.622101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.483 [2024-04-27 00:10:44.622106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.483 [2024-04-27 00:10:44.622110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.483 [2024-04-27 00:10:44.622120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.483 qpair failed and we were unable to recover it. 00:26:14.483 [2024-04-27 00:10:44.632046] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.484 [2024-04-27 00:10:44.632140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.484 [2024-04-27 00:10:44.632151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.484 [2024-04-27 00:10:44.632157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.484 [2024-04-27 00:10:44.632162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.484 [2024-04-27 00:10:44.632172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.484 qpair failed and we were unable to recover it. 00:26:14.484 [2024-04-27 00:10:44.642113] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.484 [2024-04-27 00:10:44.642178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.484 [2024-04-27 00:10:44.642192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.484 [2024-04-27 00:10:44.642197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.484 [2024-04-27 00:10:44.642201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.484 [2024-04-27 00:10:44.642211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.484 qpair failed and we were unable to recover it. 
00:26:14.484 [2024-04-27 00:10:44.652072] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.484 [2024-04-27 00:10:44.652124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.484 [2024-04-27 00:10:44.652135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.484 [2024-04-27 00:10:44.652140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.484 [2024-04-27 00:10:44.652145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.484 [2024-04-27 00:10:44.652155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.484 qpair failed and we were unable to recover it. 00:26:14.484 [2024-04-27 00:10:44.662011] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.484 [2024-04-27 00:10:44.662065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.484 [2024-04-27 00:10:44.662076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.484 [2024-04-27 00:10:44.662081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.484 [2024-04-27 00:10:44.662085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.484 [2024-04-27 00:10:44.662096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.484 qpair failed and we were unable to recover it. 00:26:14.484 [2024-04-27 00:10:44.672157] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.484 [2024-04-27 00:10:44.672211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.484 [2024-04-27 00:10:44.672222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.484 [2024-04-27 00:10:44.672227] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.484 [2024-04-27 00:10:44.672232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.484 [2024-04-27 00:10:44.672242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.484 qpair failed and we were unable to recover it. 
00:26:14.484 [2024-04-27 00:10:44.682194] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.484 [2024-04-27 00:10:44.682296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.484 [2024-04-27 00:10:44.682307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.484 [2024-04-27 00:10:44.682313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.484 [2024-04-27 00:10:44.682317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.484 [2024-04-27 00:10:44.682330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.484 qpair failed and we were unable to recover it. 00:26:14.484 [2024-04-27 00:10:44.692212] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.484 [2024-04-27 00:10:44.692264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.484 [2024-04-27 00:10:44.692275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.484 [2024-04-27 00:10:44.692280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.484 [2024-04-27 00:10:44.692285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.484 [2024-04-27 00:10:44.692295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.484 qpair failed and we were unable to recover it. 00:26:14.746 [2024-04-27 00:10:44.702109] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.746 [2024-04-27 00:10:44.702162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.746 [2024-04-27 00:10:44.702174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.746 [2024-04-27 00:10:44.702179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.746 [2024-04-27 00:10:44.702183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.746 [2024-04-27 00:10:44.702194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.746 qpair failed and we were unable to recover it. 
00:26:14.746 [2024-04-27 00:10:44.712267] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.746 [2024-04-27 00:10:44.712322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.746 [2024-04-27 00:10:44.712333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.746 [2024-04-27 00:10:44.712338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.746 [2024-04-27 00:10:44.712343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.746 [2024-04-27 00:10:44.712353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.746 qpair failed and we were unable to recover it. 00:26:14.746 [2024-04-27 00:10:44.722283] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.746 [2024-04-27 00:10:44.722337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.746 [2024-04-27 00:10:44.722348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.746 [2024-04-27 00:10:44.722353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.746 [2024-04-27 00:10:44.722358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.746 [2024-04-27 00:10:44.722368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.746 qpair failed and we were unable to recover it. 00:26:14.746 [2024-04-27 00:10:44.732325] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.746 [2024-04-27 00:10:44.732374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.746 [2024-04-27 00:10:44.732388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.746 [2024-04-27 00:10:44.732393] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.746 [2024-04-27 00:10:44.732398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.746 [2024-04-27 00:10:44.732408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.746 qpair failed and we were unable to recover it. 
00:26:14.746 [2024-04-27 00:10:44.742348] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.746 [2024-04-27 00:10:44.742401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.746 [2024-04-27 00:10:44.742412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.746 [2024-04-27 00:10:44.742417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.746 [2024-04-27 00:10:44.742422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.746 [2024-04-27 00:10:44.742432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.746 qpair failed and we were unable to recover it. 00:26:14.746 [2024-04-27 00:10:44.752377] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.746 [2024-04-27 00:10:44.752468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.746 [2024-04-27 00:10:44.752479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.746 [2024-04-27 00:10:44.752485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.746 [2024-04-27 00:10:44.752490] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.746 [2024-04-27 00:10:44.752500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.746 qpair failed and we were unable to recover it. 00:26:14.746 [2024-04-27 00:10:44.762287] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.746 [2024-04-27 00:10:44.762373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.746 [2024-04-27 00:10:44.762383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.746 [2024-04-27 00:10:44.762389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.746 [2024-04-27 00:10:44.762394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.746 [2024-04-27 00:10:44.762404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.746 qpair failed and we were unable to recover it. 
00:26:14.746 [2024-04-27 00:10:44.772379] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.772427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.772438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.772443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.772450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.772460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 00:26:14.747 [2024-04-27 00:10:44.782474] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.782567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.782578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.782583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.782588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.782598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 00:26:14.747 [2024-04-27 00:10:44.792358] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.792426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.792437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.792442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.792446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.792456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 
00:26:14.747 [2024-04-27 00:10:44.802383] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.802434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.802445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.802450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.802454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.802464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 00:26:14.747 [2024-04-27 00:10:44.812555] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.812625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.812636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.812640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.812645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.812655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 00:26:14.747 [2024-04-27 00:10:44.822548] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.822613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.822632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.822639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.822644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.822657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 
00:26:14.747 [2024-04-27 00:10:44.832583] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.832651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.832670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.832677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.832682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.832695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 00:26:14.747 [2024-04-27 00:10:44.842594] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.842646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.842658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.842664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.842669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.842680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 00:26:14.747 [2024-04-27 00:10:44.852658] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.852710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.852721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.852727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.852731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.852742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 
00:26:14.747 [2024-04-27 00:10:44.862686] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.862739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.862750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.862755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.862763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.862773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 00:26:14.747 [2024-04-27 00:10:44.872695] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.872755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.872767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.872772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.872776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.872786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 00:26:14.747 [2024-04-27 00:10:44.882755] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.882806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.882817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.747 [2024-04-27 00:10:44.882823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.747 [2024-04-27 00:10:44.882827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.747 [2024-04-27 00:10:44.882840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.747 qpair failed and we were unable to recover it. 
00:26:14.747 [2024-04-27 00:10:44.892780] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.747 [2024-04-27 00:10:44.892831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.747 [2024-04-27 00:10:44.892845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.748 [2024-04-27 00:10:44.892851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.748 [2024-04-27 00:10:44.892855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.748 [2024-04-27 00:10:44.892866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.748 qpair failed and we were unable to recover it. 00:26:14.748 [2024-04-27 00:10:44.902853] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.748 [2024-04-27 00:10:44.902929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.748 [2024-04-27 00:10:44.902940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.748 [2024-04-27 00:10:44.902945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.748 [2024-04-27 00:10:44.902950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.748 [2024-04-27 00:10:44.902960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.748 qpair failed and we were unable to recover it. 00:26:14.748 [2024-04-27 00:10:44.912945] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.748 [2024-04-27 00:10:44.913020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.748 [2024-04-27 00:10:44.913031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.748 [2024-04-27 00:10:44.913036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.748 [2024-04-27 00:10:44.913041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.748 [2024-04-27 00:10:44.913051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.748 qpair failed and we were unable to recover it. 
00:26:14.748 [2024-04-27 00:10:44.922869] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.748 [2024-04-27 00:10:44.922919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.748 [2024-04-27 00:10:44.922930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.748 [2024-04-27 00:10:44.922935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.748 [2024-04-27 00:10:44.922939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.748 [2024-04-27 00:10:44.922949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.748 qpair failed and we were unable to recover it. 00:26:14.748 [2024-04-27 00:10:44.932870] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.748 [2024-04-27 00:10:44.932923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.748 [2024-04-27 00:10:44.932935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.748 [2024-04-27 00:10:44.932940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.748 [2024-04-27 00:10:44.932944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.748 [2024-04-27 00:10:44.932955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.748 qpair failed and we were unable to recover it. 00:26:14.748 [2024-04-27 00:10:44.942920] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.748 [2024-04-27 00:10:44.942970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.748 [2024-04-27 00:10:44.942981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.748 [2024-04-27 00:10:44.942986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.748 [2024-04-27 00:10:44.942990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.748 [2024-04-27 00:10:44.943000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.748 qpair failed and we were unable to recover it. 
00:26:14.748 [2024-04-27 00:10:44.952964] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.748 [2024-04-27 00:10:44.953021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.748 [2024-04-27 00:10:44.953032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.748 [2024-04-27 00:10:44.953042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.748 [2024-04-27 00:10:44.953047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.748 [2024-04-27 00:10:44.953057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.748 qpair failed and we were unable to recover it. 00:26:14.748 [2024-04-27 00:10:44.962979] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.748 [2024-04-27 00:10:44.963029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.748 [2024-04-27 00:10:44.963041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.748 [2024-04-27 00:10:44.963046] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.748 [2024-04-27 00:10:44.963050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:14.748 [2024-04-27 00:10:44.963061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:14.748 qpair failed and we were unable to recover it. 00:26:15.009 [2024-04-27 00:10:44.972983] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:44.973035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:44.973046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:44.973051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:44.973056] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:44.973066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 
00:26:15.010 [2024-04-27 00:10:44.983054] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:44.983110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:44.983121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:44.983126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:44.983130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:44.983141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 00:26:15.010 [2024-04-27 00:10:44.992939] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:44.992998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:44.993010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:44.993015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:44.993020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:44.993031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 00:26:15.010 [2024-04-27 00:10:45.003087] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.003144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.003156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.003161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:45.003166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:45.003176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 
00:26:15.010 [2024-04-27 00:10:45.013106] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.013161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.013173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.013178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:45.013183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:45.013193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 00:26:15.010 [2024-04-27 00:10:45.023029] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.023084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.023095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.023101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:45.023105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:45.023115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 00:26:15.010 [2024-04-27 00:10:45.033148] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.033216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.033227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.033233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:45.033237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:45.033248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 
00:26:15.010 [2024-04-27 00:10:45.043197] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.043247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.043261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.043266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:45.043270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:45.043280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 00:26:15.010 [2024-04-27 00:10:45.053204] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.053256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.053267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.053272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:45.053277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:45.053287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 00:26:15.010 [2024-04-27 00:10:45.063245] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.063301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.063312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.063317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:45.063322] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:45.063332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 
00:26:15.010 [2024-04-27 00:10:45.073285] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.073344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.073355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.073360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:45.073364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:45.073375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 00:26:15.010 [2024-04-27 00:10:45.083304] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.083356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.083368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.083373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.010 [2024-04-27 00:10:45.083378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.010 [2024-04-27 00:10:45.083391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.010 qpair failed and we were unable to recover it. 00:26:15.010 [2024-04-27 00:10:45.093217] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.010 [2024-04-27 00:10:45.093289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.010 [2024-04-27 00:10:45.093301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.010 [2024-04-27 00:10:45.093306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.093310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.093320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 
00:26:15.011 [2024-04-27 00:10:45.103382] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.103435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.103446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.103451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.103455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.103465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 00:26:15.011 [2024-04-27 00:10:45.113394] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.113451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.113462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.113467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.113472] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.113482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 00:26:15.011 [2024-04-27 00:10:45.123418] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.123472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.123483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.123489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.123493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.123503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 
00:26:15.011 [2024-04-27 00:10:45.133492] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.133541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.133555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.133560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.133564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.133574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 00:26:15.011 [2024-04-27 00:10:45.143370] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.143428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.143439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.143444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.143449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.143459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 00:26:15.011 [2024-04-27 00:10:45.153414] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.153469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.153480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.153485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.153490] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.153500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 
00:26:15.011 [2024-04-27 00:10:45.163539] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.163621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.163631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.163636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.163641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.163652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 00:26:15.011 [2024-04-27 00:10:45.173569] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.173629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.173640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.173645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.173649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.173663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 00:26:15.011 [2024-04-27 00:10:45.183601] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.183663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.183682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.183688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.183695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.183708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 
00:26:15.011 [2024-04-27 00:10:45.193635] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.193692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.193712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.193718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.193723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.193736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 00:26:15.011 [2024-04-27 00:10:45.203615] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.203672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.203685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.203690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.203694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.203705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 00:26:15.011 [2024-04-27 00:10:45.213544] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.011 [2024-04-27 00:10:45.213596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.011 [2024-04-27 00:10:45.213608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.011 [2024-04-27 00:10:45.213613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.011 [2024-04-27 00:10:45.213618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.011 [2024-04-27 00:10:45.213628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.011 qpair failed and we were unable to recover it. 
00:26:15.011 [2024-04-27 00:10:45.223710] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.012 [2024-04-27 00:10:45.223769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.012 [2024-04-27 00:10:45.223780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.012 [2024-04-27 00:10:45.223786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.012 [2024-04-27 00:10:45.223790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.012 [2024-04-27 00:10:45.223800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.012 qpair failed and we were unable to recover it. 00:26:15.273 [2024-04-27 00:10:45.233736] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.273 [2024-04-27 00:10:45.233793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.273 [2024-04-27 00:10:45.233805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.273 [2024-04-27 00:10:45.233810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.273 [2024-04-27 00:10:45.233814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.273 [2024-04-27 00:10:45.233825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.273 qpair failed and we were unable to recover it. 00:26:15.273 [2024-04-27 00:10:45.243760] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.273 [2024-04-27 00:10:45.243809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.273 [2024-04-27 00:10:45.243820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.273 [2024-04-27 00:10:45.243825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.273 [2024-04-27 00:10:45.243830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.273 [2024-04-27 00:10:45.243842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.273 qpair failed and we were unable to recover it. 
00:26:15.273 [2024-04-27 00:10:45.253749] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.273 [2024-04-27 00:10:45.253800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.273 [2024-04-27 00:10:45.253811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.273 [2024-04-27 00:10:45.253816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.273 [2024-04-27 00:10:45.253821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.273 [2024-04-27 00:10:45.253831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.273 qpair failed and we were unable to recover it. 00:26:15.273 [2024-04-27 00:10:45.263694] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.273 [2024-04-27 00:10:45.263749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.273 [2024-04-27 00:10:45.263761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.273 [2024-04-27 00:10:45.263766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.273 [2024-04-27 00:10:45.263774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.273 [2024-04-27 00:10:45.263784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.273 qpair failed and we were unable to recover it. 00:26:15.273 [2024-04-27 00:10:45.273709] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.273 [2024-04-27 00:10:45.273771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.273 [2024-04-27 00:10:45.273783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.273 [2024-04-27 00:10:45.273788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.273 [2024-04-27 00:10:45.273792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.273802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 
00:26:15.274 [2024-04-27 00:10:45.283827] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.283877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.283888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.283893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.283898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.283908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 00:26:15.274 [2024-04-27 00:10:45.293922] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.293972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.293983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.293988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.293993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.294003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 00:26:15.274 [2024-04-27 00:10:45.303940] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.303991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.304002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.304007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.304011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.304021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 
00:26:15.274 [2024-04-27 00:10:45.313971] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.314034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.314046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.314051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.314055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.314065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 00:26:15.274 [2024-04-27 00:10:45.323993] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.324043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.324054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.324060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.324064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.324075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 00:26:15.274 [2024-04-27 00:10:45.334016] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.334070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.334081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.334086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.334091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.334101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 
00:26:15.274 [2024-04-27 00:10:45.344056] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.344138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.344149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.344154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.344158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.344169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 00:26:15.274 [2024-04-27 00:10:45.353971] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.354085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.354100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.354108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.354113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.354127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 00:26:15.274 [2024-04-27 00:10:45.364108] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.364162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.364174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.364179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.364183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.364194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 
00:26:15.274 [2024-04-27 00:10:45.374112] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.374164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.374175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.374181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.374185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.374196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 00:26:15.274 [2024-04-27 00:10:45.384194] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.384267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.384279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.384284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.384289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.384299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 00:26:15.274 [2024-04-27 00:10:45.394209] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.394273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.274 [2024-04-27 00:10:45.394283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.274 [2024-04-27 00:10:45.394289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.274 [2024-04-27 00:10:45.394294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.274 [2024-04-27 00:10:45.394304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.274 qpair failed and we were unable to recover it. 
00:26:15.274 [2024-04-27 00:10:45.404280] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.274 [2024-04-27 00:10:45.404331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.275 [2024-04-27 00:10:45.404342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.275 [2024-04-27 00:10:45.404347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.275 [2024-04-27 00:10:45.404352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.275 [2024-04-27 00:10:45.404362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.275 qpair failed and we were unable to recover it. 00:26:15.275 [2024-04-27 00:10:45.414114] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.275 [2024-04-27 00:10:45.414165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.275 [2024-04-27 00:10:45.414176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.275 [2024-04-27 00:10:45.414181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.275 [2024-04-27 00:10:45.414186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.275 [2024-04-27 00:10:45.414196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.275 qpair failed and we were unable to recover it. 00:26:15.275 [2024-04-27 00:10:45.424268] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.275 [2024-04-27 00:10:45.424323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.275 [2024-04-27 00:10:45.424334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.275 [2024-04-27 00:10:45.424339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.275 [2024-04-27 00:10:45.424344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.275 [2024-04-27 00:10:45.424354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.275 qpair failed and we were unable to recover it. 
00:26:15.275 [2024-04-27 00:10:45.434277] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.275 [2024-04-27 00:10:45.434333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.275 [2024-04-27 00:10:45.434345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.275 [2024-04-27 00:10:45.434350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.275 [2024-04-27 00:10:45.434354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.275 [2024-04-27 00:10:45.434364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.275 qpair failed and we were unable to recover it. 00:26:15.275 [2024-04-27 00:10:45.444369] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.275 [2024-04-27 00:10:45.444438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.275 [2024-04-27 00:10:45.444449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.275 [2024-04-27 00:10:45.444456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.275 [2024-04-27 00:10:45.444461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.275 [2024-04-27 00:10:45.444471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.275 qpair failed and we were unable to recover it. 00:26:15.275 [2024-04-27 00:10:45.454337] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.275 [2024-04-27 00:10:45.454390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.275 [2024-04-27 00:10:45.454401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.275 [2024-04-27 00:10:45.454406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.275 [2024-04-27 00:10:45.454410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.275 [2024-04-27 00:10:45.454420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.275 qpair failed and we were unable to recover it. 
00:26:15.275 [2024-04-27 00:10:45.464368] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.275 [2024-04-27 00:10:45.464435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.275 [2024-04-27 00:10:45.464446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.275 [2024-04-27 00:10:45.464451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.275 [2024-04-27 00:10:45.464455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.275 [2024-04-27 00:10:45.464465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.275 qpair failed and we were unable to recover it. 00:26:15.275 [2024-04-27 00:10:45.474274] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.275 [2024-04-27 00:10:45.474342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.275 [2024-04-27 00:10:45.474353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.275 [2024-04-27 00:10:45.474359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.275 [2024-04-27 00:10:45.474363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.275 [2024-04-27 00:10:45.474373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.275 qpair failed and we were unable to recover it. 00:26:15.275 [2024-04-27 00:10:45.484450] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.275 [2024-04-27 00:10:45.484549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.275 [2024-04-27 00:10:45.484561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.275 [2024-04-27 00:10:45.484567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.275 [2024-04-27 00:10:45.484572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.275 [2024-04-27 00:10:45.484582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.275 qpair failed and we were unable to recover it. 
00:26:15.536 [2024-04-27 00:10:45.494462] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.536 [2024-04-27 00:10:45.494520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.536 [2024-04-27 00:10:45.494531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.536 [2024-04-27 00:10:45.494536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.536 [2024-04-27 00:10:45.494541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.536 [2024-04-27 00:10:45.494551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.536 qpair failed and we were unable to recover it. 00:26:15.536 [2024-04-27 00:10:45.504496] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.536 [2024-04-27 00:10:45.504548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.536 [2024-04-27 00:10:45.504559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.536 [2024-04-27 00:10:45.504564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.536 [2024-04-27 00:10:45.504569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.536 [2024-04-27 00:10:45.504579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.536 qpair failed and we were unable to recover it. 00:26:15.536 [2024-04-27 00:10:45.514523] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.536 [2024-04-27 00:10:45.514576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.536 [2024-04-27 00:10:45.514587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.536 [2024-04-27 00:10:45.514592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.536 [2024-04-27 00:10:45.514597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.536 [2024-04-27 00:10:45.514607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.536 qpair failed and we were unable to recover it. 
00:26:15.536 [2024-04-27 00:10:45.524519] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.536 [2024-04-27 00:10:45.524567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.536 [2024-04-27 00:10:45.524578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.536 [2024-04-27 00:10:45.524583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.536 [2024-04-27 00:10:45.524588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.536 [2024-04-27 00:10:45.524598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.536 qpair failed and we were unable to recover it. 00:26:15.536 [2024-04-27 00:10:45.534557] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.536 [2024-04-27 00:10:45.534646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.536 [2024-04-27 00:10:45.534661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.536 [2024-04-27 00:10:45.534666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.536 [2024-04-27 00:10:45.534670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.536 [2024-04-27 00:10:45.534681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.536 qpair failed and we were unable to recover it. 00:26:15.536 [2024-04-27 00:10:45.544592] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.536 [2024-04-27 00:10:45.544641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.536 [2024-04-27 00:10:45.544653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.536 [2024-04-27 00:10:45.544658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.544662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.544672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 
00:26:15.537 [2024-04-27 00:10:45.554601] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.554659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.554670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.554676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.554680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.554691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 00:26:15.537 [2024-04-27 00:10:45.564638] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.564690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.564701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.564706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.564711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.564721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 00:26:15.537 [2024-04-27 00:10:45.574680] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.574728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.574739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.574744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.574749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.574761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 
00:26:15.537 [2024-04-27 00:10:45.584671] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.584729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.584740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.584745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.584750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.584760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 00:26:15.537 [2024-04-27 00:10:45.594742] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.594796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.594807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.594813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.594817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.594827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 00:26:15.537 [2024-04-27 00:10:45.604683] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.604785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.604798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.604803] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.604808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.604818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 
00:26:15.537 [2024-04-27 00:10:45.614792] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.614848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.614861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.614866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.614870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.614882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 00:26:15.537 [2024-04-27 00:10:45.624793] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.624862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.624876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.624881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.624886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.624896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 00:26:15.537 [2024-04-27 00:10:45.634873] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.634934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.634945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.634951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.634955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.634965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 
00:26:15.537 [2024-04-27 00:10:45.644873] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.644924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.644935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.644941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.644946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.644956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 00:26:15.537 [2024-04-27 00:10:45.654952] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.655004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.655015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.655020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.655025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.655035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 00:26:15.537 [2024-04-27 00:10:45.664950] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.665002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.665013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.665018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.665025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.665035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 
00:26:15.537 [2024-04-27 00:10:45.674976] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.537 [2024-04-27 00:10:45.675034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.537 [2024-04-27 00:10:45.675045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.537 [2024-04-27 00:10:45.675051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.537 [2024-04-27 00:10:45.675055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.537 [2024-04-27 00:10:45.675065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.537 qpair failed and we were unable to recover it. 00:26:15.537 [2024-04-27 00:10:45.684997] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.538 [2024-04-27 00:10:45.685047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.538 [2024-04-27 00:10:45.685058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.538 [2024-04-27 00:10:45.685063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.538 [2024-04-27 00:10:45.685068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.538 [2024-04-27 00:10:45.685078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.538 qpair failed and we were unable to recover it. 00:26:15.538 [2024-04-27 00:10:45.695064] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.538 [2024-04-27 00:10:45.695116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.538 [2024-04-27 00:10:45.695127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.538 [2024-04-27 00:10:45.695132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.538 [2024-04-27 00:10:45.695137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.538 [2024-04-27 00:10:45.695147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.538 qpair failed and we were unable to recover it. 
00:26:15.538 [2024-04-27 00:10:45.704953] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.538 [2024-04-27 00:10:45.705044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.538 [2024-04-27 00:10:45.705058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.538 [2024-04-27 00:10:45.705064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.538 [2024-04-27 00:10:45.705068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.538 [2024-04-27 00:10:45.705079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.538 qpair failed and we were unable to recover it. 00:26:15.538 [2024-04-27 00:10:45.715088] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.538 [2024-04-27 00:10:45.715148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.538 [2024-04-27 00:10:45.715160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.538 [2024-04-27 00:10:45.715165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.538 [2024-04-27 00:10:45.715170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.538 [2024-04-27 00:10:45.715180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.538 qpair failed and we were unable to recover it. 00:26:15.538 [2024-04-27 00:10:45.724984] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.538 [2024-04-27 00:10:45.725042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.538 [2024-04-27 00:10:45.725054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.538 [2024-04-27 00:10:45.725059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.538 [2024-04-27 00:10:45.725064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.538 [2024-04-27 00:10:45.725074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.538 qpair failed and we were unable to recover it. 
00:26:15.538 [2024-04-27 00:10:45.735115] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.538 [2024-04-27 00:10:45.735178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.538 [2024-04-27 00:10:45.735190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.538 [2024-04-27 00:10:45.735195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.538 [2024-04-27 00:10:45.735199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.538 [2024-04-27 00:10:45.735209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.538 qpair failed and we were unable to recover it. 00:26:15.538 [2024-04-27 00:10:45.745047] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.538 [2024-04-27 00:10:45.745113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.538 [2024-04-27 00:10:45.745125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.538 [2024-04-27 00:10:45.745130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.538 [2024-04-27 00:10:45.745134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.538 [2024-04-27 00:10:45.745144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.538 qpair failed and we were unable to recover it. 00:26:15.538 [2024-04-27 00:10:45.755185] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.538 [2024-04-27 00:10:45.755240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.538 [2024-04-27 00:10:45.755252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.538 [2024-04-27 00:10:45.755262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.538 [2024-04-27 00:10:45.755267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.755279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 
00:26:15.799 [2024-04-27 00:10:45.765225] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.799 [2024-04-27 00:10:45.765280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.799 [2024-04-27 00:10:45.765291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.799 [2024-04-27 00:10:45.765296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.799 [2024-04-27 00:10:45.765301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.765311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 00:26:15.799 [2024-04-27 00:10:45.775263] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.799 [2024-04-27 00:10:45.775314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.799 [2024-04-27 00:10:45.775327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.799 [2024-04-27 00:10:45.775332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.799 [2024-04-27 00:10:45.775337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.775348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 00:26:15.799 [2024-04-27 00:10:45.785181] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.799 [2024-04-27 00:10:45.785278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.799 [2024-04-27 00:10:45.785289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.799 [2024-04-27 00:10:45.785295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.799 [2024-04-27 00:10:45.785299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.785309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 
00:26:15.799 [2024-04-27 00:10:45.795311] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.799 [2024-04-27 00:10:45.795364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.799 [2024-04-27 00:10:45.795375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.799 [2024-04-27 00:10:45.795380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.799 [2024-04-27 00:10:45.795385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.795395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 00:26:15.799 [2024-04-27 00:10:45.805338] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.799 [2024-04-27 00:10:45.805387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.799 [2024-04-27 00:10:45.805398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.799 [2024-04-27 00:10:45.805404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.799 [2024-04-27 00:10:45.805408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.805418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 00:26:15.799 [2024-04-27 00:10:45.815353] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.799 [2024-04-27 00:10:45.815406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.799 [2024-04-27 00:10:45.815418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.799 [2024-04-27 00:10:45.815423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.799 [2024-04-27 00:10:45.815427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.815437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 
00:26:15.799 [2024-04-27 00:10:45.825364] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.799 [2024-04-27 00:10:45.825424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.799 [2024-04-27 00:10:45.825435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.799 [2024-04-27 00:10:45.825440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.799 [2024-04-27 00:10:45.825445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.825455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 00:26:15.799 [2024-04-27 00:10:45.835396] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.799 [2024-04-27 00:10:45.835456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.799 [2024-04-27 00:10:45.835468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.799 [2024-04-27 00:10:45.835473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.799 [2024-04-27 00:10:45.835478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.835488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 00:26:15.799 [2024-04-27 00:10:45.845438] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.799 [2024-04-27 00:10:45.845523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.799 [2024-04-27 00:10:45.845534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.799 [2024-04-27 00:10:45.845543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.799 [2024-04-27 00:10:45.845547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.799 [2024-04-27 00:10:45.845558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.799 qpair failed and we were unable to recover it. 
00:26:15.799 [2024-04-27 00:10:45.855497] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.855569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.855580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.855586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.855591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.855601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 00:26:15.800 [2024-04-27 00:10:45.865499] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.865598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.865610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.865615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.865620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.865630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 00:26:15.800 [2024-04-27 00:10:45.875472] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.875533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.875544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.875549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.875554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.875564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 
00:26:15.800 [2024-04-27 00:10:45.885546] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.885596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.885607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.885613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.885617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.885627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 00:26:15.800 [2024-04-27 00:10:45.895556] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.895608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.895619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.895624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.895628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.895638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 00:26:15.800 [2024-04-27 00:10:45.905594] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.905647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.905658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.905663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.905667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.905677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 
00:26:15.800 [2024-04-27 00:10:45.915655] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.915708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.915719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.915724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.915728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.915738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 00:26:15.800 [2024-04-27 00:10:45.925614] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.925666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.925678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.925682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.925687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.925697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 00:26:15.800 [2024-04-27 00:10:45.935702] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.935773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.935787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.935793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.935797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.935807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 
00:26:15.800 [2024-04-27 00:10:45.945696] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.945751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.945762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.945767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.945772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.945782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 00:26:15.800 [2024-04-27 00:10:45.955611] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.955668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.955679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.955684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.955689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.955699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.800 qpair failed and we were unable to recover it. 00:26:15.800 [2024-04-27 00:10:45.965760] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.800 [2024-04-27 00:10:45.965812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.800 [2024-04-27 00:10:45.965823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.800 [2024-04-27 00:10:45.965828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.800 [2024-04-27 00:10:45.965833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.800 [2024-04-27 00:10:45.965847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.801 qpair failed and we were unable to recover it. 
00:26:15.801 [2024-04-27 00:10:45.975796] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.801 [2024-04-27 00:10:45.975845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.801 [2024-04-27 00:10:45.975856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.801 [2024-04-27 00:10:45.975862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.801 [2024-04-27 00:10:45.975866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.801 [2024-04-27 00:10:45.975879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.801 qpair failed and we were unable to recover it. 00:26:15.801 [2024-04-27 00:10:45.985827] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.801 [2024-04-27 00:10:45.985927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.801 [2024-04-27 00:10:45.985938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.801 [2024-04-27 00:10:45.985944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.801 [2024-04-27 00:10:45.985949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.801 [2024-04-27 00:10:45.985959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.801 qpair failed and we were unable to recover it. 00:26:15.801 [2024-04-27 00:10:45.995754] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.801 [2024-04-27 00:10:45.995814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.801 [2024-04-27 00:10:45.995825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.801 [2024-04-27 00:10:45.995830] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.801 [2024-04-27 00:10:45.995834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.801 [2024-04-27 00:10:45.995848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.801 qpair failed and we were unable to recover it. 
00:26:15.801 [2024-04-27 00:10:46.005876] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.801 [2024-04-27 00:10:46.005931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.801 [2024-04-27 00:10:46.005942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.801 [2024-04-27 00:10:46.005947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.801 [2024-04-27 00:10:46.005952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.801 [2024-04-27 00:10:46.005962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.801 qpair failed and we were unable to recover it. 00:26:15.801 [2024-04-27 00:10:46.015888] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.801 [2024-04-27 00:10:46.015945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.801 [2024-04-27 00:10:46.015956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.801 [2024-04-27 00:10:46.015961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.801 [2024-04-27 00:10:46.015965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:15.801 [2024-04-27 00:10:46.015975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:15.801 qpair failed and we were unable to recover it. 00:26:16.061 [2024-04-27 00:10:46.025901] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.061 [2024-04-27 00:10:46.025957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.061 [2024-04-27 00:10:46.025972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.061 [2024-04-27 00:10:46.025977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.061 [2024-04-27 00:10:46.025981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.061 [2024-04-27 00:10:46.025991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.061 qpair failed and we were unable to recover it. 
00:26:16.061 [2024-04-27 00:10:46.035853] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.061 [2024-04-27 00:10:46.035917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.061 [2024-04-27 00:10:46.035928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.061 [2024-04-27 00:10:46.035933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.061 [2024-04-27 00:10:46.035938] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.061 [2024-04-27 00:10:46.035948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.061 qpair failed and we were unable to recover it. 00:26:16.061 [2024-04-27 00:10:46.046000] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.061 [2024-04-27 00:10:46.046054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.061 [2024-04-27 00:10:46.046065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.061 [2024-04-27 00:10:46.046071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.061 [2024-04-27 00:10:46.046075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.061 [2024-04-27 00:10:46.046087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.061 qpair failed and we were unable to recover it. 00:26:16.061 [2024-04-27 00:10:46.056021] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.061 [2024-04-27 00:10:46.056076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.061 [2024-04-27 00:10:46.056087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.061 [2024-04-27 00:10:46.056092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.061 [2024-04-27 00:10:46.056097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.061 [2024-04-27 00:10:46.056107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.061 qpair failed and we were unable to recover it. 
00:26:16.061 [2024-04-27 00:10:46.066090] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.061 [2024-04-27 00:10:46.066143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.061 [2024-04-27 00:10:46.066154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.061 [2024-04-27 00:10:46.066159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.061 [2024-04-27 00:10:46.066166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.061 [2024-04-27 00:10:46.066176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.061 qpair failed and we were unable to recover it. 00:26:16.061 [2024-04-27 00:10:46.076094] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.061 [2024-04-27 00:10:46.076150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.061 [2024-04-27 00:10:46.076162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.061 [2024-04-27 00:10:46.076167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.061 [2024-04-27 00:10:46.076172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.061 [2024-04-27 00:10:46.076182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.061 qpair failed and we were unable to recover it. 00:26:16.061 [2024-04-27 00:10:46.086082] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.061 [2024-04-27 00:10:46.086133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.061 [2024-04-27 00:10:46.086144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.061 [2024-04-27 00:10:46.086149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.061 [2024-04-27 00:10:46.086153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.061 [2024-04-27 00:10:46.086163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.061 qpair failed and we were unable to recover it. 
00:26:16.061 [2024-04-27 00:10:46.096106] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.061 [2024-04-27 00:10:46.096154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.096165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.096170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.096174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.096184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 00:26:16.062 [2024-04-27 00:10:46.106063] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.106114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.106125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.106130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.106135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.106145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 00:26:16.062 [2024-04-27 00:10:46.116171] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.116236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.116248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.116252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.116257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.116267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 
00:26:16.062 [2024-04-27 00:10:46.126094] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.126157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.126169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.126174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.126178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.126188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 00:26:16.062 [2024-04-27 00:10:46.136296] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.136346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.136357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.136362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.136366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.136377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 00:26:16.062 [2024-04-27 00:10:46.146262] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.146321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.146332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.146337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.146342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.146352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 
00:26:16.062 [2024-04-27 00:10:46.156268] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.156327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.156338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.156343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.156350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.156360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 00:26:16.062 [2024-04-27 00:10:46.166331] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.166383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.166396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.166402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.166406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.166418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 00:26:16.062 [2024-04-27 00:10:46.176365] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.176414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.176426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.176431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.176435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.176446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 
00:26:16.062 [2024-04-27 00:10:46.186460] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.186518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.186530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.186535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.186539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.186550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 00:26:16.062 [2024-04-27 00:10:46.196424] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.196482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.196493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.196498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.196503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.196513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 00:26:16.062 [2024-04-27 00:10:46.206440] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.206490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.206502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.206507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.206511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.062 [2024-04-27 00:10:46.206521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.062 qpair failed and we were unable to recover it. 
00:26:16.062 [2024-04-27 00:10:46.216483] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.062 [2024-04-27 00:10:46.216536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.062 [2024-04-27 00:10:46.216547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.062 [2024-04-27 00:10:46.216553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.062 [2024-04-27 00:10:46.216557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.063 [2024-04-27 00:10:46.216568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.063 qpair failed and we were unable to recover it. 00:26:16.063 [2024-04-27 00:10:46.226508] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.063 [2024-04-27 00:10:46.226561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.063 [2024-04-27 00:10:46.226572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.063 [2024-04-27 00:10:46.226577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.063 [2024-04-27 00:10:46.226581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.063 [2024-04-27 00:10:46.226591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.063 qpair failed and we were unable to recover it. 00:26:16.063 [2024-04-27 00:10:46.236526] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.063 [2024-04-27 00:10:46.236578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.063 [2024-04-27 00:10:46.236589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.063 [2024-04-27 00:10:46.236594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.063 [2024-04-27 00:10:46.236598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.063 [2024-04-27 00:10:46.236608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.063 qpair failed and we were unable to recover it. 
00:26:16.063 [2024-04-27 00:10:46.246553] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.063 [2024-04-27 00:10:46.246604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.063 [2024-04-27 00:10:46.246615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.063 [2024-04-27 00:10:46.246623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.063 [2024-04-27 00:10:46.246627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.063 [2024-04-27 00:10:46.246637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.063 qpair failed and we were unable to recover it. 00:26:16.063 [2024-04-27 00:10:46.256575] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.063 [2024-04-27 00:10:46.256633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.063 [2024-04-27 00:10:46.256644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.063 [2024-04-27 00:10:46.256649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.063 [2024-04-27 00:10:46.256653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.063 [2024-04-27 00:10:46.256663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.063 qpair failed and we were unable to recover it. 00:26:16.063 [2024-04-27 00:10:46.266615] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.063 [2024-04-27 00:10:46.266701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.063 [2024-04-27 00:10:46.266712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.063 [2024-04-27 00:10:46.266717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.063 [2024-04-27 00:10:46.266723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.063 [2024-04-27 00:10:46.266733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.063 qpair failed and we were unable to recover it. 
00:26:16.063 [2024-04-27 00:10:46.276627] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.063 [2024-04-27 00:10:46.276682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.063 [2024-04-27 00:10:46.276694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.063 [2024-04-27 00:10:46.276700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.063 [2024-04-27 00:10:46.276706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.063 [2024-04-27 00:10:46.276718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.063 qpair failed and we were unable to recover it. 00:26:16.324 [2024-04-27 00:10:46.286658] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.324 [2024-04-27 00:10:46.286714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.324 [2024-04-27 00:10:46.286725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.324 [2024-04-27 00:10:46.286731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.324 [2024-04-27 00:10:46.286735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.324 [2024-04-27 00:10:46.286745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.324 qpair failed and we were unable to recover it. 00:26:16.324 [2024-04-27 00:10:46.296698] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.324 [2024-04-27 00:10:46.296747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.324 [2024-04-27 00:10:46.296758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.324 [2024-04-27 00:10:46.296764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.324 [2024-04-27 00:10:46.296768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.324 [2024-04-27 00:10:46.296779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 
00:26:16.325 [2024-04-27 00:10:46.306609] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.306660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.306672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.306677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.306682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.306693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 00:26:16.325 [2024-04-27 00:10:46.316764] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.316825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.316842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.316848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.316852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.316863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 00:26:16.325 [2024-04-27 00:10:46.326786] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.326843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.326854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.326860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.326864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.326875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 
00:26:16.325 [2024-04-27 00:10:46.336688] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.336742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.336756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.336761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.336766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.336776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 00:26:16.325 [2024-04-27 00:10:46.346843] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.346895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.346906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.346911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.346916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.346926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 00:26:16.325 [2024-04-27 00:10:46.356872] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.356933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.356944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.356949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.356954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.356964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 
00:26:16.325 [2024-04-27 00:10:46.366884] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.366933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.366944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.366950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.366955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.366965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 00:26:16.325 [2024-04-27 00:10:46.376917] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.376967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.376978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.376983] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.376987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.377000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 00:26:16.325 [2024-04-27 00:10:46.386942] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.387001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.387012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.387017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.387021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.387032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 
00:26:16.325 [2024-04-27 00:10:46.396981] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.397045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.397057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.397062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.397067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.397080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 00:26:16.325 [2024-04-27 00:10:46.407001] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.407050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.407062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.407067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.407072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.407083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 00:26:16.325 [2024-04-27 00:10:46.417058] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.325 [2024-04-27 00:10:46.417117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.325 [2024-04-27 00:10:46.417127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.325 [2024-04-27 00:10:46.417132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.325 [2024-04-27 00:10:46.417137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.325 [2024-04-27 00:10:46.417148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.325 qpair failed and we were unable to recover it. 
00:26:16.325 [2024-04-27 00:10:46.427066] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.427119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.427133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.427139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.427143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.427153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 00:26:16.326 [2024-04-27 00:10:46.437105] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.437166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.437177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.437182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.437187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.437197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 00:26:16.326 [2024-04-27 00:10:46.447133] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.447184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.447195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.447201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.447205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.447215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 
00:26:16.326 [2024-04-27 00:10:46.457029] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.457088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.457099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.457104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.457109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.457119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 00:26:16.326 [2024-04-27 00:10:46.467270] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.467334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.467346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.467351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.467360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.467371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 00:26:16.326 [2024-04-27 00:10:46.477254] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.477312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.477323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.477328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.477332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.477342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 
00:26:16.326 [2024-04-27 00:10:46.487277] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.487330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.487341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.487345] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.487351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.487361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 00:26:16.326 [2024-04-27 00:10:46.497309] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.497407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.497418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.497423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.497427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.497437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 00:26:16.326 [2024-04-27 00:10:46.507168] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.507275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.507287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.507292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.507296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.507307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 
00:26:16.326 [2024-04-27 00:10:46.517360] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.517416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.517428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.517433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.517438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.517448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 00:26:16.326 [2024-04-27 00:10:46.527214] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.527269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.527280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.527285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.527290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.527300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 00:26:16.326 [2024-04-27 00:10:46.537250] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.326 [2024-04-27 00:10:46.537304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.326 [2024-04-27 00:10:46.537316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.326 [2024-04-27 00:10:46.537321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.326 [2024-04-27 00:10:46.537326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.326 [2024-04-27 00:10:46.537336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.326 qpair failed and we were unable to recover it. 
00:26:16.587 [2024-04-27 00:10:46.547384] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.587 [2024-04-27 00:10:46.547435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.587 [2024-04-27 00:10:46.547446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.587 [2024-04-27 00:10:46.547452] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.587 [2024-04-27 00:10:46.547456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.587 [2024-04-27 00:10:46.547466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.587 qpair failed and we were unable to recover it. 00:26:16.587 [2024-04-27 00:10:46.557421] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.587 [2024-04-27 00:10:46.557475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.587 [2024-04-27 00:10:46.557486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.587 [2024-04-27 00:10:46.557492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.587 [2024-04-27 00:10:46.557499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.587 [2024-04-27 00:10:46.557509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 00:26:16.588 [2024-04-27 00:10:46.567442] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.567495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.567506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.567512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.567516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.567526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 
00:26:16.588 [2024-04-27 00:10:46.577422] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.577480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.577491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.577496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.577500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.577511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 00:26:16.588 [2024-04-27 00:10:46.587469] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.587561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.587572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.587578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.587583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.587593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 00:26:16.588 [2024-04-27 00:10:46.597540] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.597639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.597659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.597665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.597671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.597684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 
00:26:16.588 [2024-04-27 00:10:46.607586] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.607642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.607661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.607667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.607672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.607686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 00:26:16.588 [2024-04-27 00:10:46.617604] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.617661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.617680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.617686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.617692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.617705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 00:26:16.588 [2024-04-27 00:10:46.627633] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.627688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.627701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.627707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.627711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.627723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 
00:26:16.588 [2024-04-27 00:10:46.637664] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.637751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.637763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.637768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.637774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.637785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 00:26:16.588 [2024-04-27 00:10:46.647551] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.647604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.647616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.647624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.647629] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.647640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 00:26:16.588 [2024-04-27 00:10:46.657670] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.657727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.657739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.657744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.657749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.657760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 
00:26:16.588 [2024-04-27 00:10:46.667728] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.667785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.667796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.667801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.667806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.588 [2024-04-27 00:10:46.667817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.588 qpair failed and we were unable to recover it. 00:26:16.588 [2024-04-27 00:10:46.677810] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.588 [2024-04-27 00:10:46.677893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.588 [2024-04-27 00:10:46.677905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.588 [2024-04-27 00:10:46.677910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.588 [2024-04-27 00:10:46.677914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.677925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 00:26:16.589 [2024-04-27 00:10:46.687665] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.687726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.687737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.687742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.687747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.687757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 
00:26:16.589 [2024-04-27 00:10:46.697825] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.697902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.697913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.697918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.697922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.697933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 00:26:16.589 [2024-04-27 00:10:46.707813] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.707868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.707880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.707885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.707889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.707900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 00:26:16.589 [2024-04-27 00:10:46.717868] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.717920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.717931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.717937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.717941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.717951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 
00:26:16.589 [2024-04-27 00:10:46.727773] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.727832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.727846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.727852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.727856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.727866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 00:26:16.589 [2024-04-27 00:10:46.737888] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.737953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.737968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.737973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.737977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.737988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 00:26:16.589 [2024-04-27 00:10:46.747942] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.747994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.748005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.748010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.748014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.748025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 
00:26:16.589 [2024-04-27 00:10:46.757979] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.758037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.758047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.758052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.758057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.758067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 00:26:16.589 [2024-04-27 00:10:46.768016] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.768065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.768076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.768082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.768086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.768096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 00:26:16.589 [2024-04-27 00:10:46.777872] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.777926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.777937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.777942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.777947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.777960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 
00:26:16.589 [2024-04-27 00:10:46.788076] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.788130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.788141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.788146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.788151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.788161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 00:26:16.589 [2024-04-27 00:10:46.798111] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.589 [2024-04-27 00:10:46.798168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.589 [2024-04-27 00:10:46.798179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.589 [2024-04-27 00:10:46.798184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.589 [2024-04-27 00:10:46.798189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.589 [2024-04-27 00:10:46.798199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.589 qpair failed and we were unable to recover it. 00:26:16.850 [2024-04-27 00:10:46.808096] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.850 [2024-04-27 00:10:46.808158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.850 [2024-04-27 00:10:46.808169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.850 [2024-04-27 00:10:46.808174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.850 [2024-04-27 00:10:46.808179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.850 [2024-04-27 00:10:46.808189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.850 qpair failed and we were unable to recover it. 
00:26:16.850 [2024-04-27 00:10:46.818164] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.850 [2024-04-27 00:10:46.818219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.850 [2024-04-27 00:10:46.818231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.850 [2024-04-27 00:10:46.818236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.850 [2024-04-27 00:10:46.818240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.850 [2024-04-27 00:10:46.818250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.850 qpair failed and we were unable to recover it. 00:26:16.850 [2024-04-27 00:10:46.828192] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.850 [2024-04-27 00:10:46.828265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.850 [2024-04-27 00:10:46.828279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.850 [2024-04-27 00:10:46.828284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.850 [2024-04-27 00:10:46.828289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.828299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 00:26:16.851 [2024-04-27 00:10:46.838185] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.838244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.838255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.838260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.838265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.838275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 
00:26:16.851 [2024-04-27 00:10:46.848225] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.848295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.848306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.848311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.848315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.848325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 00:26:16.851 [2024-04-27 00:10:46.858233] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.858276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.858287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.858293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.858297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.858308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 00:26:16.851 [2024-04-27 00:10:46.868170] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.868236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.868246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.868252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.868256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.868269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 
00:26:16.851 [2024-04-27 00:10:46.878327] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.878397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.878408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.878413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.878417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.878427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 00:26:16.851 [2024-04-27 00:10:46.888339] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.888390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.888401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.888406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.888411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.888421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 00:26:16.851 [2024-04-27 00:10:46.898342] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.898387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.898397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.898403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.898407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.898417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 
00:26:16.851 [2024-04-27 00:10:46.908296] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.908351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.908362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.908367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.908371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.908381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 00:26:16.851 [2024-04-27 00:10:46.918412] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.918509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.918520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.918525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.918529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.918539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 00:26:16.851 [2024-04-27 00:10:46.928462] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.928519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.928530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.928535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.928539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.928549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 
00:26:16.851 [2024-04-27 00:10:46.938424] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.938471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.938482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.938487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.938492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.938502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 00:26:16.851 [2024-04-27 00:10:46.948506] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.851 [2024-04-27 00:10:46.948561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.851 [2024-04-27 00:10:46.948572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.851 [2024-04-27 00:10:46.948577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.851 [2024-04-27 00:10:46.948581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.851 [2024-04-27 00:10:46.948591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.851 qpair failed and we were unable to recover it. 00:26:16.851 [2024-04-27 00:10:46.958540] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:46.958602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:46.958621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:46.958627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:46.958635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:46.958649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 
00:26:16.852 [2024-04-27 00:10:46.968567] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:46.968617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:46.968636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:46.968643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:46.968647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:46.968661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 00:26:16.852 [2024-04-27 00:10:46.978541] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:46.978595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:46.978614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:46.978620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:46.978626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:46.978640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 00:26:16.852 [2024-04-27 00:10:46.988566] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:46.988614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:46.988627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:46.988632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:46.988637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:46.988648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 
00:26:16.852 [2024-04-27 00:10:46.998638] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:46.998693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:46.998705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:46.998710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:46.998715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:46.998725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 00:26:16.852 [2024-04-27 00:10:47.008633] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:47.008681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:47.008693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:47.008698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:47.008702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:47.008712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 00:26:16.852 [2024-04-27 00:10:47.018633] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:47.018681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:47.018692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:47.018697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:47.018702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:47.018712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 
00:26:16.852 [2024-04-27 00:10:47.028688] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:47.028738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:47.028751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:47.028757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:47.028761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:47.028773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 00:26:16.852 [2024-04-27 00:10:47.038736] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:47.038791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:47.038803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:47.038808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:47.038812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:47.038823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 00:26:16.852 [2024-04-27 00:10:47.048821] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:47.048873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:47.048885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:47.048893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:47.048898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:47.048908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 
00:26:16.852 [2024-04-27 00:10:47.058665] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:47.058709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:47.058721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:47.058725] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:47.058730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:47.058740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 00:26:16.852 [2024-04-27 00:10:47.068779] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.852 [2024-04-27 00:10:47.068831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.852 [2024-04-27 00:10:47.068845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.852 [2024-04-27 00:10:47.068850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.852 [2024-04-27 00:10:47.068854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:16.852 [2024-04-27 00:10:47.068865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:16.852 qpair failed and we were unable to recover it. 00:26:17.114 [2024-04-27 00:10:47.078854] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.078910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.078921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.078927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.078931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.078941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 
00:26:17.114 [2024-04-27 00:10:47.088810] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.088873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.088884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.088889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.088894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.088904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 00:26:17.114 [2024-04-27 00:10:47.098855] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.098896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.098907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.098912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.098917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.098927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 00:26:17.114 [2024-04-27 00:10:47.108897] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.108944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.108955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.108960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.108964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.108975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 
00:26:17.114 [2024-04-27 00:10:47.118938] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.118989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.119000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.119005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.119010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.119020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 00:26:17.114 [2024-04-27 00:10:47.128932] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.128990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.129000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.129005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.129010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.129020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 00:26:17.114 [2024-04-27 00:10:47.138945] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.138992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.139004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.139011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.139016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.139026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 
00:26:17.114 [2024-04-27 00:10:47.148987] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.149096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.149107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.149113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.149117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.149127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 00:26:17.114 [2024-04-27 00:10:47.159091] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.159144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.159154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.159159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.159164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.159173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 00:26:17.114 [2024-04-27 00:10:47.169052] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.169102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.169113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.169118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.169122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.169132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 
00:26:17.114 [2024-04-27 00:10:47.178963] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.179054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.179065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.179070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.179074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.179084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.114 qpair failed and we were unable to recover it. 00:26:17.114 [2024-04-27 00:10:47.189117] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.114 [2024-04-27 00:10:47.189164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.114 [2024-04-27 00:10:47.189175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.114 [2024-04-27 00:10:47.189180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.114 [2024-04-27 00:10:47.189184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.114 [2024-04-27 00:10:47.189194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 00:26:17.115 [2024-04-27 00:10:47.199076] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.199130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.199141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.199147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.199151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.199161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 
00:26:17.115 [2024-04-27 00:10:47.209060] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.209108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.209118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.209123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.209128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.209138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 00:26:17.115 [2024-04-27 00:10:47.219068] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.219165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.219177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.219183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.219187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.219198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 00:26:17.115 [2024-04-27 00:10:47.229187] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.229233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.229246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.229252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.229256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.229266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 
00:26:17.115 [2024-04-27 00:10:47.239294] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.239347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.239358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.239363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.239368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.239378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 00:26:17.115 [2024-04-27 00:10:47.249282] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.249330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.249341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.249346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.249350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.249360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 00:26:17.115 [2024-04-27 00:10:47.259321] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.259418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.259429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.259434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.259438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.259448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 
00:26:17.115 [2024-04-27 00:10:47.269376] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.269453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.269464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.269469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.269473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.269486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 00:26:17.115 [2024-04-27 00:10:47.279408] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.279458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.279469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.279474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.279479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.279489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 00:26:17.115 [2024-04-27 00:10:47.289259] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.289305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.289316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.289321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.289325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.289335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 
00:26:17.115 [2024-04-27 00:10:47.299431] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.299473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.299484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.299489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.299493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.299503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 00:26:17.115 [2024-04-27 00:10:47.309320] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.309363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.309374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.115 [2024-04-27 00:10:47.309379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.115 [2024-04-27 00:10:47.309383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.115 [2024-04-27 00:10:47.309393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.115 qpair failed and we were unable to recover it. 00:26:17.115 [2024-04-27 00:10:47.319510] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.115 [2024-04-27 00:10:47.319564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.115 [2024-04-27 00:10:47.319578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.116 [2024-04-27 00:10:47.319583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.116 [2024-04-27 00:10:47.319587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.116 [2024-04-27 00:10:47.319597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.116 qpair failed and we were unable to recover it. 
00:26:17.116 [2024-04-27 00:10:47.329526] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.116 [2024-04-27 00:10:47.329571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.116 [2024-04-27 00:10:47.329582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.116 [2024-04-27 00:10:47.329587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.116 [2024-04-27 00:10:47.329592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.116 [2024-04-27 00:10:47.329602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.116 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.339533] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.339577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.339589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.339594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.339598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.339609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.349556] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.349603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.349615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.349620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.349625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.349635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 
00:26:17.377 [2024-04-27 00:10:47.359593] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.359663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.359675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.359680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.359688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.359699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.369603] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.369649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.369660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.369665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.369670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.369680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.379647] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.379691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.379702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.379707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.379711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.379721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 
00:26:17.377 [2024-04-27 00:10:47.389662] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.389707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.389719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.389724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.389728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.389738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.399742] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.399826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.399840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.399846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.399850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.399861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.409578] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.409636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.409647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.409652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.409656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.409666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 
00:26:17.377 [2024-04-27 00:10:47.419738] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.419781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.419792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.419798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.419802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.419812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.429757] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.429802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.429813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.429818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.429822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.429832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.439850] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.439900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.439911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.439916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.439921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.439931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 
00:26:17.377 [2024-04-27 00:10:47.449836] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.449926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.449937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.449946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.449950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.449960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.459742] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.459808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.459819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.459824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.459829] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.459841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 00:26:17.377 [2024-04-27 00:10:47.469763] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.469810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.469821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.377 [2024-04-27 00:10:47.469826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.377 [2024-04-27 00:10:47.469831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.377 [2024-04-27 00:10:47.469844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.377 qpair failed and we were unable to recover it. 
00:26:17.377 [2024-04-27 00:10:47.479944] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.377 [2024-04-27 00:10:47.479999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.377 [2024-04-27 00:10:47.480010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.480015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.480019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.480029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 00:26:17.378 [2024-04-27 00:10:47.489954] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.490000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.490011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.490016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.490020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.490030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 00:26:17.378 [2024-04-27 00:10:47.499983] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.500031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.500042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.500048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.500052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.500062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 
00:26:17.378 [2024-04-27 00:10:47.510001] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.510046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.510057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.510062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.510067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.510077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 00:26:17.378 [2024-04-27 00:10:47.520041] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.520095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.520107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.520112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.520116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.520126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 00:26:17.378 [2024-04-27 00:10:47.530047] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.530094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.530105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.530110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.530114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.530124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 
00:26:17.378 [2024-04-27 00:10:47.540079] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.540123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.540134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.540142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.540146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.540157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 00:26:17.378 [2024-04-27 00:10:47.550078] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.550125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.550135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.550140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.550145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.550155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 00:26:17.378 [2024-04-27 00:10:47.560168] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.560219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.560230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.560235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.560239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.560249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 
00:26:17.378 [2024-04-27 00:10:47.570146] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.570190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.570201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.570206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.570210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.570220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 00:26:17.378 [2024-04-27 00:10:47.580174] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.580217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.580228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.580233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.580238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.580248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 00:26:17.378 [2024-04-27 00:10:47.590186] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.378 [2024-04-27 00:10:47.590280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.378 [2024-04-27 00:10:47.590292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.378 [2024-04-27 00:10:47.590297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.378 [2024-04-27 00:10:47.590301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.378 [2024-04-27 00:10:47.590311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.378 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-27 00:10:47.600219] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.600278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.600289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.600295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.600299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.600310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.610250] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.610295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.610306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.610311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.610316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.610326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.620254] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.620297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.620308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.620313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.620318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.620328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-27 00:10:47.630310] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.630358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.630372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.630377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.630382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.630392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.640360] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.640452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.640464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.640469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.640474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.640484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.650227] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.650274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.650285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.650291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.650295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.650306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-27 00:10:47.660394] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.660438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.660449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.660454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.660459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.660469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.670415] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.670473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.670484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.670489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.670494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.670506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.680446] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.680497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.680508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.680513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.680518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.680527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-27 00:10:47.690465] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.690509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.690520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.690525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.690530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.690540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.700519] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.700566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.700577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.700582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.700587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.700597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.710529] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.710574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.710585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.710590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.710594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.710604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-27 00:10:47.720563] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.720614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.720628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.720634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.720638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.720648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.730569] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.730659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.730679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.730686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.730691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.730704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.740477] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.740521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.740534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.740540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.740545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.740556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-27 00:10:47.750642] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.750690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.750701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.750706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.750711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.750721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.760522] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.760568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.760579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.640 [2024-04-27 00:10:47.760584] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.640 [2024-04-27 00:10:47.760592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.640 [2024-04-27 00:10:47.760603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-27 00:10:47.770658] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.640 [2024-04-27 00:10:47.770743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.640 [2024-04-27 00:10:47.770754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.641 [2024-04-27 00:10:47.770760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.641 [2024-04-27 00:10:47.770764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.641 [2024-04-27 00:10:47.770775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.641 qpair failed and we were unable to recover it. 
00:26:17.641 [2024-04-27 00:10:47.780712] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.641 [2024-04-27 00:10:47.780755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.641 [2024-04-27 00:10:47.780766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.641 [2024-04-27 00:10:47.780771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.641 [2024-04-27 00:10:47.780776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.641 [2024-04-27 00:10:47.780786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-27 00:10:47.790744] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.641 [2024-04-27 00:10:47.790790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.641 [2024-04-27 00:10:47.790802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.641 [2024-04-27 00:10:47.790807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.641 [2024-04-27 00:10:47.790811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.641 [2024-04-27 00:10:47.790821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-27 00:10:47.800777] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.641 [2024-04-27 00:10:47.800823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.641 [2024-04-27 00:10:47.800835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.641 [2024-04-27 00:10:47.800844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.641 [2024-04-27 00:10:47.800848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.641 [2024-04-27 00:10:47.800858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.641 qpair failed and we were unable to recover it. 
00:26:17.641 [2024-04-27 00:10:47.810797] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.641 [2024-04-27 00:10:47.810846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.641 [2024-04-27 00:10:47.810857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.641 [2024-04-27 00:10:47.810862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.641 [2024-04-27 00:10:47.810867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.641 [2024-04-27 00:10:47.810877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-27 00:10:47.820832] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.641 [2024-04-27 00:10:47.820881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.641 [2024-04-27 00:10:47.820894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.641 [2024-04-27 00:10:47.820900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.641 [2024-04-27 00:10:47.820904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.641 [2024-04-27 00:10:47.820915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-27 00:10:47.830740] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.641 [2024-04-27 00:10:47.830790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.641 [2024-04-27 00:10:47.830802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.641 [2024-04-27 00:10:47.830807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.641 [2024-04-27 00:10:47.830812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.641 [2024-04-27 00:10:47.830822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.641 qpair failed and we were unable to recover it. 
00:26:17.641 [2024-04-27 00:10:47.840894] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.641 [2024-04-27 00:10:47.840989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.641 [2024-04-27 00:10:47.841001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.641 [2024-04-27 00:10:47.841006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.641 [2024-04-27 00:10:47.841011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.641 [2024-04-27 00:10:47.841021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-27 00:10:47.850916] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.641 [2024-04-27 00:10:47.850959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.641 [2024-04-27 00:10:47.850971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.641 [2024-04-27 00:10:47.850976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.641 [2024-04-27 00:10:47.850983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.641 [2024-04-27 00:10:47.850993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.902 [2024-04-27 00:10:47.860944] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.902 [2024-04-27 00:10:47.861023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.902 [2024-04-27 00:10:47.861034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.902 [2024-04-27 00:10:47.861040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.861045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.861055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-27 00:10:47.870944] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.870990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.871001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.871006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.871010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.871020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:47.880970] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.881025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.881036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.881041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.881045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.881056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:47.891088] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.891147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.891159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.891164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.891168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.891178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-27 00:10:47.901082] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.901144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.901156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.901161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.901165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.901175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:47.911078] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.911126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.911137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.911142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.911146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.911156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:47.921094] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.921152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.921163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.921168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.921173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.921183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-27 00:10:47.931139] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.931181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.931192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.931197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.931202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.931212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:47.941120] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.941165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.941176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.941187] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.941192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.941201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:47.951051] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.951097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.951108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.951114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.951120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.951130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-27 00:10:47.961178] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.961231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.961242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.961248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.961252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.961262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:47.971233] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.971306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.971317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.971322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.971327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.971337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:47.981245] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.981288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.981299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.981304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.981309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.981319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-27 00:10:47.991364] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:47.991412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:47.991423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:47.991430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:47.991436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:47.991446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:48.001318] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:48.001369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:48.001380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:48.001386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:48.001390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:48.001401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:48.011328] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:48.011376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:48.011387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:48.011392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:48.011397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:48.011406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-27 00:10:48.021232] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:48.021278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:48.021289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:48.021294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:48.021299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:48.021310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:48.031407] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:48.031452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:48.031466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:48.031472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:48.031477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:48.031487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:48.041288] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:48.041337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:48.041349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:48.041354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:48.041359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:48.041369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 
00:26:17.903 [2024-04-27 00:10:48.051462] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:48.051514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:48.051525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:48.051531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:48.051535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:48.051546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:48.061329] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:48.061378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:48.061389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:48.061394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.903 [2024-04-27 00:10:48.061399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.903 [2024-04-27 00:10:48.061409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.903 qpair failed and we were unable to recover it. 00:26:17.903 [2024-04-27 00:10:48.071493] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.903 [2024-04-27 00:10:48.071542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.903 [2024-04-27 00:10:48.071555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.903 [2024-04-27 00:10:48.071560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.904 [2024-04-27 00:10:48.071565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.904 [2024-04-27 00:10:48.071581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.904 qpair failed and we were unable to recover it. 
00:26:17.904 [2024-04-27 00:10:48.081525] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.904 [2024-04-27 00:10:48.081575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.904 [2024-04-27 00:10:48.081587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.904 [2024-04-27 00:10:48.081592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.904 [2024-04-27 00:10:48.081597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.904 [2024-04-27 00:10:48.081607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-27 00:10:48.091559] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.904 [2024-04-27 00:10:48.091603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.904 [2024-04-27 00:10:48.091614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.904 [2024-04-27 00:10:48.091620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.904 [2024-04-27 00:10:48.091625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.904 [2024-04-27 00:10:48.091635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.904 qpair failed and we were unable to recover it. 00:26:17.904 [2024-04-27 00:10:48.101585] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.904 [2024-04-27 00:10:48.101630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.904 [2024-04-27 00:10:48.101641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.904 [2024-04-27 00:10:48.101646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.904 [2024-04-27 00:10:48.101651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.904 [2024-04-27 00:10:48.101661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.904 qpair failed and we were unable to recover it. 
00:26:17.904 [2024-04-27 00:10:48.111593] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.904 [2024-04-27 00:10:48.111638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.904 [2024-04-27 00:10:48.111649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.904 [2024-04-27 00:10:48.111654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.904 [2024-04-27 00:10:48.111659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:17.904 [2024-04-27 00:10:48.111669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:17.904 qpair failed and we were unable to recover it. 00:26:18.165 [2024-04-27 00:10:48.121621] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.165 [2024-04-27 00:10:48.121672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.165 [2024-04-27 00:10:48.121686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.165 [2024-04-27 00:10:48.121692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.165 [2024-04-27 00:10:48.121696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.165 [2024-04-27 00:10:48.121707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.165 qpair failed and we were unable to recover it. 00:26:18.165 [2024-04-27 00:10:48.131660] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.165 [2024-04-27 00:10:48.131707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.165 [2024-04-27 00:10:48.131718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.165 [2024-04-27 00:10:48.131723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.165 [2024-04-27 00:10:48.131727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.165 [2024-04-27 00:10:48.131738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.165 qpair failed and we were unable to recover it. 
00:26:18.165 [2024-04-27 00:10:48.141678] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.165 [2024-04-27 00:10:48.141724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.165 [2024-04-27 00:10:48.141736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.165 [2024-04-27 00:10:48.141741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.165 [2024-04-27 00:10:48.141745] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.165 [2024-04-27 00:10:48.141756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.165 qpair failed and we were unable to recover it. 00:26:18.165 [2024-04-27 00:10:48.151694] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.165 [2024-04-27 00:10:48.151782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.165 [2024-04-27 00:10:48.151794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.165 [2024-04-27 00:10:48.151799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.165 [2024-04-27 00:10:48.151804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.165 [2024-04-27 00:10:48.151814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.165 qpair failed and we were unable to recover it. 00:26:18.165 [2024-04-27 00:10:48.161741] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.165 [2024-04-27 00:10:48.161789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.165 [2024-04-27 00:10:48.161800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.165 [2024-04-27 00:10:48.161805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.165 [2024-04-27 00:10:48.161812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.165 [2024-04-27 00:10:48.161822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.165 qpair failed and we were unable to recover it. 
00:26:18.165 [2024-04-27 00:10:48.171635] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.165 [2024-04-27 00:10:48.171680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.165 [2024-04-27 00:10:48.171692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.165 [2024-04-27 00:10:48.171697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.165 [2024-04-27 00:10:48.171701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.171712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 00:26:18.166 [2024-04-27 00:10:48.181800] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.181848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.181860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.181865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.181869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.181880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 00:26:18.166 [2024-04-27 00:10:48.191692] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.191738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.191749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.191754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.191759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.191769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 
00:26:18.166 [2024-04-27 00:10:48.201845] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.201896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.201907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.201912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.201917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.201927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 00:26:18.166 [2024-04-27 00:10:48.211871] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.211919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.211930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.211935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.211939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.211950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 00:26:18.166 [2024-04-27 00:10:48.221908] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.221951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.221963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.221968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.221973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.221983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 
00:26:18.166 [2024-04-27 00:10:48.231935] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.232017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.232028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.232033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.232038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.232049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 00:26:18.166 [2024-04-27 00:10:48.241839] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.241894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.241906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.241911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.241916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.241927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 00:26:18.166 [2024-04-27 00:10:48.251858] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.251905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.251916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.251921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.251928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.251939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 
00:26:18.166 [2024-04-27 00:10:48.261982] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.262031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.262042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.262048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.262052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.262063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 00:26:18.166 [2024-04-27 00:10:48.272034] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.272080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.272091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.272096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.272100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.272111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 00:26:18.166 [2024-04-27 00:10:48.281945] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.281995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.282010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.282017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.282022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.282034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 
00:26:18.166 [2024-04-27 00:10:48.292104] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.292158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.166 [2024-04-27 00:10:48.292169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.166 [2024-04-27 00:10:48.292175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.166 [2024-04-27 00:10:48.292179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.166 [2024-04-27 00:10:48.292190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.166 qpair failed and we were unable to recover it. 00:26:18.166 [2024-04-27 00:10:48.302118] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.166 [2024-04-27 00:10:48.302209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.167 [2024-04-27 00:10:48.302221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.167 [2024-04-27 00:10:48.302226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.167 [2024-04-27 00:10:48.302230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.167 [2024-04-27 00:10:48.302241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.167 qpair failed and we were unable to recover it. 00:26:18.167 [2024-04-27 00:10:48.312153] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.167 [2024-04-27 00:10:48.312197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.167 [2024-04-27 00:10:48.312208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.167 [2024-04-27 00:10:48.312213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.167 [2024-04-27 00:10:48.312218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.167 [2024-04-27 00:10:48.312228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.167 qpair failed and we were unable to recover it. 
00:26:18.167 [2024-04-27 00:10:48.322170] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.167 [2024-04-27 00:10:48.322222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.167 [2024-04-27 00:10:48.322233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.167 [2024-04-27 00:10:48.322239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.167 [2024-04-27 00:10:48.322243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.167 [2024-04-27 00:10:48.322254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.167 qpair failed and we were unable to recover it. 00:26:18.167 [2024-04-27 00:10:48.332195] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.167 [2024-04-27 00:10:48.332247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.167 [2024-04-27 00:10:48.332258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.167 [2024-04-27 00:10:48.332263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.167 [2024-04-27 00:10:48.332268] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.167 [2024-04-27 00:10:48.332279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.167 qpair failed and we were unable to recover it. 00:26:18.167 [2024-04-27 00:10:48.342207] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.167 [2024-04-27 00:10:48.342254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.167 [2024-04-27 00:10:48.342266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.167 [2024-04-27 00:10:48.342273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.167 [2024-04-27 00:10:48.342278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.167 [2024-04-27 00:10:48.342288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.167 qpair failed and we were unable to recover it. 
00:26:18.167 [2024-04-27 00:10:48.352121] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.167 [2024-04-27 00:10:48.352167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.167 [2024-04-27 00:10:48.352178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.167 [2024-04-27 00:10:48.352183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.167 [2024-04-27 00:10:48.352188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.167 [2024-04-27 00:10:48.352198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.167 qpair failed and we were unable to recover it. 00:26:18.167 [2024-04-27 00:10:48.362281] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.167 [2024-04-27 00:10:48.362333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.167 [2024-04-27 00:10:48.362347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.167 [2024-04-27 00:10:48.362352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.167 [2024-04-27 00:10:48.362357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.167 [2024-04-27 00:10:48.362369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.167 qpair failed and we were unable to recover it. 00:26:18.167 [2024-04-27 00:10:48.372297] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.167 [2024-04-27 00:10:48.372340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.167 [2024-04-27 00:10:48.372352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.167 [2024-04-27 00:10:48.372357] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.167 [2024-04-27 00:10:48.372361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.167 [2024-04-27 00:10:48.372372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.167 qpair failed and we were unable to recover it. 
00:26:18.167 [2024-04-27 00:10:48.382218] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.167 [2024-04-27 00:10:48.382285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.167 [2024-04-27 00:10:48.382296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.167 [2024-04-27 00:10:48.382301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.167 [2024-04-27 00:10:48.382306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.167 [2024-04-27 00:10:48.382316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.167 qpair failed and we were unable to recover it. 00:26:18.428 [2024-04-27 00:10:48.392212] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.392258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.392269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.392275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.392279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.392290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 00:26:18.429 [2024-04-27 00:10:48.402238] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.402290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.402301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.402306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.402311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.402321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 
00:26:18.429 [2024-04-27 00:10:48.412386] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.412427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.412439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.412444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.412448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.412458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 00:26:18.429 [2024-04-27 00:10:48.422426] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.422468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.422479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.422484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.422488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.422498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 00:26:18.429 [2024-04-27 00:10:48.432450] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.432509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.432523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.432528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.432532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.432543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 
00:26:18.429 [2024-04-27 00:10:48.442488] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.442536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.442548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.442553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.442557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.442567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 00:26:18.429 [2024-04-27 00:10:48.452504] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.452555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.452566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.452571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.452576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.452586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 00:26:18.429 [2024-04-27 00:10:48.462522] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.462568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.462579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.462584] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.462588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.462599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 
00:26:18.429 [2024-04-27 00:10:48.472556] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.472604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.472615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.472620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.472624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.472637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 00:26:18.429 [2024-04-27 00:10:48.482455] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.482504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.482515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.482520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.482524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.482535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 00:26:18.429 [2024-04-27 00:10:48.492483] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.492527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.492537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.492542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.492547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.492557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 
00:26:18.429 [2024-04-27 00:10:48.502599] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.502656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.502667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.429 [2024-04-27 00:10:48.502672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.429 [2024-04-27 00:10:48.502676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.429 [2024-04-27 00:10:48.502687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.429 qpair failed and we were unable to recover it. 00:26:18.429 [2024-04-27 00:10:48.512547] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.429 [2024-04-27 00:10:48.512594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.429 [2024-04-27 00:10:48.512606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.512612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.512616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.512627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 00:26:18.430 [2024-04-27 00:10:48.522693] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.522744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.522758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.522764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.522768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.522779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 
00:26:18.430 [2024-04-27 00:10:48.532741] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.532784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.532795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.532800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.532805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.532815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 00:26:18.430 [2024-04-27 00:10:48.542755] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.542802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.542813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.542818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.542823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.542833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 00:26:18.430 [2024-04-27 00:10:48.552782] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.552829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.552844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.552850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.552854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.552865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 
00:26:18.430 [2024-04-27 00:10:48.562819] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.562871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.562882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.562887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.562892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.562905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 00:26:18.430 [2024-04-27 00:10:48.572818] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.572919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.572930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.572936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.572941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.572951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 00:26:18.430 [2024-04-27 00:10:48.582727] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.582775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.582785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.582791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.582795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.582805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 
00:26:18.430 [2024-04-27 00:10:48.592890] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.592937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.592948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.592953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.592958] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.592968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 00:26:18.430 [2024-04-27 00:10:48.602927] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.602978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.602989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.602994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.602999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.603010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 00:26:18.430 [2024-04-27 00:10:48.612985] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.613035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.613046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.613051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.613055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.613066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 
00:26:18.430 [2024-04-27 00:10:48.622980] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.623082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.430 [2024-04-27 00:10:48.623094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.430 [2024-04-27 00:10:48.623099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.430 [2024-04-27 00:10:48.623104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.430 [2024-04-27 00:10:48.623114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.430 qpair failed and we were unable to recover it. 00:26:18.430 [2024-04-27 00:10:48.632997] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.430 [2024-04-27 00:10:48.633046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.431 [2024-04-27 00:10:48.633057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.431 [2024-04-27 00:10:48.633063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.431 [2024-04-27 00:10:48.633067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.431 [2024-04-27 00:10:48.633077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.431 qpair failed and we were unable to recover it. 00:26:18.431 [2024-04-27 00:10:48.643037] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.431 [2024-04-27 00:10:48.643084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.431 [2024-04-27 00:10:48.643095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.431 [2024-04-27 00:10:48.643100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.431 [2024-04-27 00:10:48.643105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.431 [2024-04-27 00:10:48.643115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.431 qpair failed and we were unable to recover it. 
00:26:18.692 [2024-04-27 00:10:48.653044] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.692 [2024-04-27 00:10:48.653085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.692 [2024-04-27 00:10:48.653096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.692 [2024-04-27 00:10:48.653102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.692 [2024-04-27 00:10:48.653109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.692 [2024-04-27 00:10:48.653120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-04-27 00:10:48.663078] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.692 [2024-04-27 00:10:48.663126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.692 [2024-04-27 00:10:48.663137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.692 [2024-04-27 00:10:48.663142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.692 [2024-04-27 00:10:48.663147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.692 [2024-04-27 00:10:48.663157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-04-27 00:10:48.673131] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.692 [2024-04-27 00:10:48.673178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.692 [2024-04-27 00:10:48.673189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.692 [2024-04-27 00:10:48.673194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.692 [2024-04-27 00:10:48.673198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.692 [2024-04-27 00:10:48.673208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.692 qpair failed and we were unable to recover it. 
00:26:18.692 [2024-04-27 00:10:48.683176] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.692 [2024-04-27 00:10:48.683229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.692 [2024-04-27 00:10:48.683241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.692 [2024-04-27 00:10:48.683247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.692 [2024-04-27 00:10:48.683252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.692 [2024-04-27 00:10:48.683263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-04-27 00:10:48.693173] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.692 [2024-04-27 00:10:48.693223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.692 [2024-04-27 00:10:48.693234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.692 [2024-04-27 00:10:48.693239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.692 [2024-04-27 00:10:48.693244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.692 [2024-04-27 00:10:48.693255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-04-27 00:10:48.703058] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.692 [2024-04-27 00:10:48.703106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.692 [2024-04-27 00:10:48.703117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.692 [2024-04-27 00:10:48.703122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.692 [2024-04-27 00:10:48.703126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.692 [2024-04-27 00:10:48.703137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.692 qpair failed and we were unable to recover it. 
00:26:18.692 [2024-04-27 00:10:48.713235] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.692 [2024-04-27 00:10:48.713280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.692 [2024-04-27 00:10:48.713292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.692 [2024-04-27 00:10:48.713297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.692 [2024-04-27 00:10:48.713301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.692 [2024-04-27 00:10:48.713311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-04-27 00:10:48.723247] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.692 [2024-04-27 00:10:48.723297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.692 [2024-04-27 00:10:48.723308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.692 [2024-04-27 00:10:48.723313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.692 [2024-04-27 00:10:48.723317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.692 [2024-04-27 00:10:48.723328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-04-27 00:10:48.733280] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.692 [2024-04-27 00:10:48.733322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.692 [2024-04-27 00:10:48.733332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.692 [2024-04-27 00:10:48.733337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.692 [2024-04-27 00:10:48.733342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.733352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 
00:26:18.693 [2024-04-27 00:10:48.743172] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.743235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.743246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.743257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.743262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.743272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-04-27 00:10:48.753335] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.753381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.753393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.753398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.753403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.753413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-04-27 00:10:48.763239] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.763288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.763301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.763306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.763310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.763321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 
00:26:18.693 [2024-04-27 00:10:48.773369] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.773414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.773425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.773431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.773435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.773445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-04-27 00:10:48.783433] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.783477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.783488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.783493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.783498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.783508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-04-27 00:10:48.793440] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.793485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.793496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.793501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.793505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.793515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 
00:26:18.693 [2024-04-27 00:10:48.803471] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.803564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.803576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.803581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.803585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.803595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-04-27 00:10:48.813496] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.813548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.813568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.813574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.813579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.813593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-04-27 00:10:48.823538] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.823590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.823602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.823608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.823612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.823623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 
00:26:18.693 [2024-04-27 00:10:48.833551] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.833596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.833608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.833616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.833621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.833632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-04-27 00:10:48.843457] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.843515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.843527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.843533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.843537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.843548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-04-27 00:10:48.853612] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.693 [2024-04-27 00:10:48.853655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.693 [2024-04-27 00:10:48.853666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.693 [2024-04-27 00:10:48.853671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.693 [2024-04-27 00:10:48.853676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.693 [2024-04-27 00:10:48.853686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.693 qpair failed and we were unable to recover it. 
00:26:18.693 [2024-04-27 00:10:48.863656] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.694 [2024-04-27 00:10:48.863703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.694 [2024-04-27 00:10:48.863716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.694 [2024-04-27 00:10:48.863721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.694 [2024-04-27 00:10:48.863726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.694 [2024-04-27 00:10:48.863736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-04-27 00:10:48.873671] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.694 [2024-04-27 00:10:48.873720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.694 [2024-04-27 00:10:48.873731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.694 [2024-04-27 00:10:48.873736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.694 [2024-04-27 00:10:48.873741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.694 [2024-04-27 00:10:48.873751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-04-27 00:10:48.883572] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.694 [2024-04-27 00:10:48.883618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.694 [2024-04-27 00:10:48.883630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.694 [2024-04-27 00:10:48.883635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.694 [2024-04-27 00:10:48.883640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.694 [2024-04-27 00:10:48.883650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.694 qpair failed and we were unable to recover it. 
00:26:18.694 [2024-04-27 00:10:48.893690] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.694 [2024-04-27 00:10:48.893737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.694 [2024-04-27 00:10:48.893748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.694 [2024-04-27 00:10:48.893753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.694 [2024-04-27 00:10:48.893758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.694 [2024-04-27 00:10:48.893768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-04-27 00:10:48.903735] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.694 [2024-04-27 00:10:48.903807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.694 [2024-04-27 00:10:48.903818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.694 [2024-04-27 00:10:48.903823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.694 [2024-04-27 00:10:48.903828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.694 [2024-04-27 00:10:48.903840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.956 [2024-04-27 00:10:48.913822] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.956 [2024-04-27 00:10:48.913871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.956 [2024-04-27 00:10:48.913882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.956 [2024-04-27 00:10:48.913887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.956 [2024-04-27 00:10:48.913892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.956 [2024-04-27 00:10:48.913902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.956 qpair failed and we were unable to recover it. 
00:26:18.956 [2024-04-27 00:10:48.923689] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.956 [2024-04-27 00:10:48.923745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.956 [2024-04-27 00:10:48.923763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.956 [2024-04-27 00:10:48.923768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.956 [2024-04-27 00:10:48.923773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.956 [2024-04-27 00:10:48.923783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.956 qpair failed and we were unable to recover it. 00:26:18.956 [2024-04-27 00:10:48.933720] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.956 [2024-04-27 00:10:48.933764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.956 [2024-04-27 00:10:48.933776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.956 [2024-04-27 00:10:48.933781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.956 [2024-04-27 00:10:48.933786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.956 [2024-04-27 00:10:48.933797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.956 qpair failed and we were unable to recover it. 00:26:18.956 [2024-04-27 00:10:48.943938] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.956 [2024-04-27 00:10:48.943983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.956 [2024-04-27 00:10:48.943995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.956 [2024-04-27 00:10:48.944000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.956 [2024-04-27 00:10:48.944005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.956 [2024-04-27 00:10:48.944016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.956 qpair failed and we were unable to recover it. 
00:26:18.956 [2024-04-27 00:10:48.953755] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.956 [2024-04-27 00:10:48.953802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.956 [2024-04-27 00:10:48.953813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:48.953818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:48.953822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:48.953832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 00:26:18.957 [2024-04-27 00:10:48.963892] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:48.963941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:48.963952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:48.963957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:48.963961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:48.963975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 00:26:18.957 [2024-04-27 00:10:48.973920] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:48.973969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:48.973980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:48.973985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:48.973989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:48.973999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 
00:26:18.957 [2024-04-27 00:10:48.983942] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:48.983986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:48.983997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:48.984002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:48.984006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:48.984016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 00:26:18.957 [2024-04-27 00:10:48.993972] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:48.994018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:48.994029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:48.994034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:48.994039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:48.994049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 00:26:18.957 [2024-04-27 00:10:49.004021] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:49.004123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:49.004134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:49.004139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:49.004144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:49.004154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 
00:26:18.957 [2024-04-27 00:10:49.014011] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:49.014061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:49.014075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:49.014080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:49.014084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:49.014094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 00:26:18.957 [2024-04-27 00:10:49.024058] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:49.024109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:49.024120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:49.024126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:49.024131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:49.024140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 00:26:18.957 [2024-04-27 00:10:49.034059] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:49.034106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:49.034116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:49.034122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:49.034126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:49.034136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 
00:26:18.957 [2024-04-27 00:10:49.044126] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:49.044186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:49.044197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:49.044202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:49.044207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:49.044217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 00:26:18.957 [2024-04-27 00:10:49.054150] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:49.054226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:49.054237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:49.054242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:49.054250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:49.054260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 00:26:18.957 [2024-04-27 00:10:49.064139] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:49.064184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:49.064196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:49.064201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:49.064205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:49.064215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 
00:26:18.957 [2024-04-27 00:10:49.074275] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.957 [2024-04-27 00:10:49.074325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.957 [2024-04-27 00:10:49.074336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.957 [2024-04-27 00:10:49.074341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.957 [2024-04-27 00:10:49.074346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.957 [2024-04-27 00:10:49.074356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.957 qpair failed and we were unable to recover it. 00:26:18.957 [2024-04-27 00:10:49.084230] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.084285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.084295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.084300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.084305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.084315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 00:26:18.958 [2024-04-27 00:10:49.094197] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.094241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.094252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.094257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.094262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.094272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 
00:26:18.958 [2024-04-27 00:10:49.104319] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.104366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.104378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.104383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.104387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.104397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 00:26:18.958 [2024-04-27 00:10:49.114296] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.114340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.114351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.114356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.114360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.114370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 00:26:18.958 [2024-04-27 00:10:49.124339] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.124386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.124398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.124402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.124407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.124417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 
00:26:18.958 [2024-04-27 00:10:49.134352] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.134443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.134454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.134459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.134463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.134473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 00:26:18.958 [2024-04-27 00:10:49.144337] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.144384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.144395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.144402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.144407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.144416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 00:26:18.958 [2024-04-27 00:10:49.154416] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.154463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.154474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.154479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.154484] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.154494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 
00:26:18.958 [2024-04-27 00:10:49.164464] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.164512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.164523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.164528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.164533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.164542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 00:26:18.958 [2024-04-27 00:10:49.174453] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.958 [2024-04-27 00:10:49.174499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.958 [2024-04-27 00:10:49.174510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.958 [2024-04-27 00:10:49.174515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.958 [2024-04-27 00:10:49.174519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:18.958 [2024-04-27 00:10:49.174529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:18.958 qpair failed and we were unable to recover it. 00:26:19.220 [2024-04-27 00:10:49.184492] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.184537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.184548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.184553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.184557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.184567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 
00:26:19.220 [2024-04-27 00:10:49.194489] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.194534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.194545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.194550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.194555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.194565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 00:26:19.220 [2024-04-27 00:10:49.204542] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.204597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.204616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.204622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.204627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.204640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 00:26:19.220 [2024-04-27 00:10:49.214439] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.214489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.214508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.214515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.214520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.214533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 
00:26:19.220 [2024-04-27 00:10:49.224604] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.224652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.224672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.224678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.224683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.224696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 00:26:19.220 [2024-04-27 00:10:49.234594] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.234645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.234664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.234674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.234678] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.234691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 00:26:19.220 [2024-04-27 00:10:49.244647] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.244708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.244721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.244727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.244731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.244743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 
00:26:19.220 [2024-04-27 00:10:49.254676] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.254769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.254781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.254786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.254790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.254801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 00:26:19.220 [2024-04-27 00:10:49.264700] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.264741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.264753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.264758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.264762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.264773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 00:26:19.220 [2024-04-27 00:10:49.274725] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.274816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.274827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.274832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.274839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.274850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 
00:26:19.220 [2024-04-27 00:10:49.284745] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.220 [2024-04-27 00:10:49.284797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.220 [2024-04-27 00:10:49.284808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.220 [2024-04-27 00:10:49.284813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.220 [2024-04-27 00:10:49.284817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.220 [2024-04-27 00:10:49.284827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.220 qpair failed and we were unable to recover it. 00:26:19.220 [2024-04-27 00:10:49.294777] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.294851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.294862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.294867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.294871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.294882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 00:26:19.221 [2024-04-27 00:10:49.304676] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.304722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.304734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.304739] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.304744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.304754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 
00:26:19.221 [2024-04-27 00:10:49.314699] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.314776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.314788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.314793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.314797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.314807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 00:26:19.221 [2024-04-27 00:10:49.324880] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.324930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.324943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.324948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.324953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.324963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 00:26:19.221 [2024-04-27 00:10:49.334880] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.334952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.334963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.334968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.334973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.334983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 
00:26:19.221 [2024-04-27 00:10:49.344876] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.344920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.344931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.344936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.344941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.344951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 00:26:19.221 [2024-04-27 00:10:49.354933] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.354981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.354992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.354997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.355002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.355012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 00:26:19.221 [2024-04-27 00:10:49.364931] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.364979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.364990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.364995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.365000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.365013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 
00:26:19.221 [2024-04-27 00:10:49.374992] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.375037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.375048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.375053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.375057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.375067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 00:26:19.221 [2024-04-27 00:10:49.385009] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.385054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.385065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.385070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.385075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.385085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 00:26:19.221 [2024-04-27 00:10:49.395046] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.395132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.395143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.395148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.395152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.221 [2024-04-27 00:10:49.395163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.221 qpair failed and we were unable to recover it. 
00:26:19.221 [2024-04-27 00:10:49.405044] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.221 [2024-04-27 00:10:49.405091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.221 [2024-04-27 00:10:49.405102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.221 [2024-04-27 00:10:49.405107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.221 [2024-04-27 00:10:49.405112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.222 [2024-04-27 00:10:49.405122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.222 qpair failed and we were unable to recover it. 00:26:19.222 [2024-04-27 00:10:49.415110] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.222 [2024-04-27 00:10:49.415156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.222 [2024-04-27 00:10:49.415169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.222 [2024-04-27 00:10:49.415174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.222 [2024-04-27 00:10:49.415178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.222 [2024-04-27 00:10:49.415188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.222 qpair failed and we were unable to recover it. 00:26:19.222 [2024-04-27 00:10:49.425132] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.222 [2024-04-27 00:10:49.425178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.222 [2024-04-27 00:10:49.425189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.222 [2024-04-27 00:10:49.425194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.222 [2024-04-27 00:10:49.425198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.222 [2024-04-27 00:10:49.425208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.222 qpair failed and we were unable to recover it. 
00:26:19.222 [2024-04-27 00:10:49.435156] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.222 [2024-04-27 00:10:49.435205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.222 [2024-04-27 00:10:49.435215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.222 [2024-04-27 00:10:49.435220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.222 [2024-04-27 00:10:49.435224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.222 [2024-04-27 00:10:49.435234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.222 qpair failed and we were unable to recover it. 00:26:19.483 [2024-04-27 00:10:49.445171] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.483 [2024-04-27 00:10:49.445219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.483 [2024-04-27 00:10:49.445231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.483 [2024-04-27 00:10:49.445236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.483 [2024-04-27 00:10:49.445240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.483 [2024-04-27 00:10:49.445250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.483 qpair failed and we were unable to recover it. 00:26:19.483 [2024-04-27 00:10:49.455185] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.483 [2024-04-27 00:10:49.455239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.483 [2024-04-27 00:10:49.455250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.483 [2024-04-27 00:10:49.455255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.483 [2024-04-27 00:10:49.455266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.483 [2024-04-27 00:10:49.455276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.483 qpair failed and we were unable to recover it. 
00:26:19.483 [2024-04-27 00:10:49.465197] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.483 [2024-04-27 00:10:49.465238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.483 [2024-04-27 00:10:49.465249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.483 [2024-04-27 00:10:49.465254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.483 [2024-04-27 00:10:49.465258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.483 [2024-04-27 00:10:49.465268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.483 qpair failed and we were unable to recover it. 00:26:19.483 [2024-04-27 00:10:49.475254] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.483 [2024-04-27 00:10:49.475299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.483 [2024-04-27 00:10:49.475310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.483 [2024-04-27 00:10:49.475315] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.483 [2024-04-27 00:10:49.475320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.483 [2024-04-27 00:10:49.475329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.483 qpair failed and we were unable to recover it. 00:26:19.483 [2024-04-27 00:10:49.485350] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.483 [2024-04-27 00:10:49.485402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.483 [2024-04-27 00:10:49.485413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.483 [2024-04-27 00:10:49.485418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.484 [2024-04-27 00:10:49.485422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.484 [2024-04-27 00:10:49.485432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.484 qpair failed and we were unable to recover it. 
00:26:19.484 [2024-04-27 00:10:49.495301] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.484 [2024-04-27 00:10:49.495347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.484 [2024-04-27 00:10:49.495358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.484 [2024-04-27 00:10:49.495363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.484 [2024-04-27 00:10:49.495367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.484 [2024-04-27 00:10:49.495377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.484 qpair failed and we were unable to recover it. 00:26:19.484 [2024-04-27 00:10:49.505208] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.484 [2024-04-27 00:10:49.505267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.484 [2024-04-27 00:10:49.505279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.484 [2024-04-27 00:10:49.505284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.484 [2024-04-27 00:10:49.505289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.484 [2024-04-27 00:10:49.505299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.484 qpair failed and we were unable to recover it. 00:26:19.484 [2024-04-27 00:10:49.515352] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.484 [2024-04-27 00:10:49.515445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.484 [2024-04-27 00:10:49.515456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.484 [2024-04-27 00:10:49.515461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.484 [2024-04-27 00:10:49.515466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.484 [2024-04-27 00:10:49.515476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.484 qpair failed and we were unable to recover it. 
00:26:19.484 [2024-04-27 00:10:49.525390] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.484 [2024-04-27 00:10:49.525441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.484 [2024-04-27 00:10:49.525455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.484 [2024-04-27 00:10:49.525460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.484 [2024-04-27 00:10:49.525465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc78000b90 00:26:19.484 [2024-04-27 00:10:49.525476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.484 qpair failed and we were unable to recover it. 00:26:19.484 [2024-04-27 00:10:49.535465] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.484 [2024-04-27 00:10:49.535596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.484 [2024-04-27 00:10:49.535660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.484 [2024-04-27 00:10:49.535687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.484 [2024-04-27 00:10:49.535707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc70000b90 00:26:19.484 [2024-04-27 00:10:49.535757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.484 qpair failed and we were unable to recover it. 00:26:19.484 [2024-04-27 00:10:49.545448] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.484 [2024-04-27 00:10:49.545541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.484 [2024-04-27 00:10:49.545571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.484 [2024-04-27 00:10:49.545587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.484 [2024-04-27 00:10:49.545606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc70000b90 00:26:19.484 [2024-04-27 00:10:49.545635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.484 qpair failed and we were unable to recover it. 
00:26:19.484 [2024-04-27 00:10:49.555466] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.484 [2024-04-27 00:10:49.555537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.484 [2024-04-27 00:10:49.555562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.484 [2024-04-27 00:10:49.555571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.484 [2024-04-27 00:10:49.555578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:19.484 [2024-04-27 00:10:49.555596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:19.484 qpair failed and we were unable to recover it. 00:26:19.484 [2024-04-27 00:10:49.565509] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.484 [2024-04-27 00:10:49.565574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.484 [2024-04-27 00:10:49.565592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.484 [2024-04-27 00:10:49.565600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.484 [2024-04-27 00:10:49.565606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1254650 00:26:19.484 [2024-04-27 00:10:49.565621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:19.484 qpair failed and we were unable to recover it. 
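Note: every failure above follows the same pattern — the host retries an I/O qpair CONNECT to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 while the target-side controller is being torn down, the target rejects it with "Unknown controller ID 0x1" (sct 1, sc 130), and spdk_nvme_qpair_process_completions then reports CQ transport error -6 (No such device or address), so the qpair cannot be recovered. The shell sketch below is not part of the test; it is a minimal, hypothetical way to issue the same fabrics connect by hand with nvme-cli, using only the trtype/traddr/trsvcid/subnqn values printed in the errors, plus a grep to tally the failed qpairs in a saved copy of this console output (the log path is a placeholder).

# Hypothetical manual reproduction (assumes nvme-cli on the initiator and the
# target from this run still listening on 10.0.0.2:4420); during the disconnect
# window this is expected to fail just like the CONNECT attempts above.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

# Tally the unrecovered qpairs in a saved copy of this console output
# (console.log is a placeholder path, not a file produced by the test).
grep -c 'qpair failed and we were unable to recover it' console.log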
00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Write completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Write completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Write completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Write completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Write completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Write completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Write completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Read completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Write completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 Write completed with error (sct=0, sc=8) 00:26:19.484 starting I/O failed 00:26:19.484 [2024-04-27 00:10:49.566031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.484 [2024-04-27 00:10:49.575500] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.484 [2024-04-27 00:10:49.575558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.485 [2024-04-27 00:10:49.575578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.485 [2024-04-27 00:10:49.575586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:26:19.485 [2024-04-27 00:10:49.575593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc80000b90 00:26:19.485 [2024-04-27 00:10:49.575610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.485 qpair failed and we were unable to recover it. 00:26:19.485 [2024-04-27 00:10:49.585565] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.485 [2024-04-27 00:10:49.585622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.485 [2024-04-27 00:10:49.585638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.485 [2024-04-27 00:10:49.585646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.485 [2024-04-27 00:10:49.585652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbc80000b90 00:26:19.485 [2024-04-27 00:10:49.585667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.485 qpair failed and we were unable to recover it. 00:26:19.485 [2024-04-27 00:10:49.585800] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:19.485 A controller has encountered a failure and is being reset. 00:26:19.485 [2024-04-27 00:10:49.585922] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1262160 (9): Bad file descriptor 00:26:19.485 Controller properly reset. 00:26:19.485 Initializing NVMe Controllers 00:26:19.485 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:19.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:19.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:19.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:19.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:19.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:19.485 Initialization complete. Launching workers. 
00:26:19.485 Starting thread on core 1 00:26:19.485 Starting thread on core 2 00:26:19.485 Starting thread on core 3 00:26:19.485 Starting thread on core 0 00:26:19.485 00:10:49 -- host/target_disconnect.sh@59 -- # sync 00:26:19.485 00:26:19.485 real 0m11.413s 00:26:19.485 user 0m21.531s 00:26:19.485 sys 0m3.826s 00:26:19.485 00:10:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:19.485 00:10:49 -- common/autotest_common.sh@10 -- # set +x 00:26:19.485 ************************************ 00:26:19.485 END TEST nvmf_target_disconnect_tc2 00:26:19.485 ************************************ 00:26:19.485 00:10:49 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:26:19.485 00:10:49 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:19.485 00:10:49 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:26:19.485 00:10:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:19.485 00:10:49 -- nvmf/common.sh@117 -- # sync 00:26:19.485 00:10:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.485 00:10:49 -- nvmf/common.sh@120 -- # set +e 00:26:19.485 00:10:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.485 00:10:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.746 rmmod nvme_tcp 00:26:19.746 rmmod nvme_fabrics 00:26:19.746 rmmod nvme_keyring 00:26:19.746 00:10:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.746 00:10:49 -- nvmf/common.sh@124 -- # set -e 00:26:19.746 00:10:49 -- nvmf/common.sh@125 -- # return 0 00:26:19.746 00:10:49 -- nvmf/common.sh@478 -- # '[' -n 555499 ']' 00:26:19.746 00:10:49 -- nvmf/common.sh@479 -- # killprocess 555499 00:26:19.746 00:10:49 -- common/autotest_common.sh@936 -- # '[' -z 555499 ']' 00:26:19.746 00:10:49 -- common/autotest_common.sh@940 -- # kill -0 555499 00:26:19.746 00:10:49 -- common/autotest_common.sh@941 -- # uname 00:26:19.746 00:10:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.746 00:10:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 555499 00:26:19.746 00:10:49 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:26:19.746 00:10:49 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:26:19.746 00:10:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 555499' 00:26:19.746 killing process with pid 555499 00:26:19.746 00:10:49 -- common/autotest_common.sh@955 -- # kill 555499 00:26:19.746 00:10:49 -- common/autotest_common.sh@960 -- # wait 555499 00:26:19.746 00:10:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:19.746 00:10:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:19.746 00:10:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:19.746 00:10:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.746 00:10:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.746 00:10:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.746 00:10:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.746 00:10:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.291 00:10:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:22.291 00:26:22.291 real 0m21.482s 00:26:22.291 user 0m49.489s 00:26:22.291 sys 0m9.539s 00:26:22.291 00:10:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:22.291 00:10:51 -- common/autotest_common.sh@10 -- # set +x 00:26:22.291 ************************************ 00:26:22.291 END TEST nvmf_target_disconnect 00:26:22.291 
************************************ 00:26:22.291 00:10:52 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:26:22.291 00:10:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:22.291 00:10:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.291 00:10:52 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:26:22.291 00:26:22.291 real 19m37.981s 00:26:22.291 user 40m11.681s 00:26:22.291 sys 6m30.892s 00:26:22.291 00:10:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:22.291 00:10:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.291 ************************************ 00:26:22.291 END TEST nvmf_tcp 00:26:22.291 ************************************ 00:26:22.291 00:10:52 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:26:22.291 00:10:52 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:22.291 00:10:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:22.291 00:10:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:22.291 00:10:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.291 ************************************ 00:26:22.291 START TEST spdkcli_nvmf_tcp 00:26:22.291 ************************************ 00:26:22.291 00:10:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:22.291 * Looking for test storage... 00:26:22.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:22.291 00:10:52 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:22.291 00:10:52 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:22.291 00:10:52 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:22.291 00:10:52 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.291 00:10:52 -- nvmf/common.sh@7 -- # uname -s 00:26:22.291 00:10:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.291 00:10:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.291 00:10:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.291 00:10:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.291 00:10:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.291 00:10:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.291 00:10:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.291 00:10:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.291 00:10:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.291 00:10:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.291 00:10:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:22.291 00:10:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:22.291 00:10:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.291 00:10:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.291 00:10:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.291 00:10:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.291 00:10:52 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.291 00:10:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.291 00:10:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.291 00:10:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.291 00:10:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.291 00:10:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.291 00:10:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.291 00:10:52 -- paths/export.sh@5 -- # export PATH 00:26:22.291 00:10:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.291 00:10:52 -- nvmf/common.sh@47 -- # : 0 00:26:22.291 00:10:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:22.291 00:10:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:22.291 00:10:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.291 00:10:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.291 00:10:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.291 00:10:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:22.291 00:10:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:22.291 00:10:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:22.291 00:10:52 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:22.291 00:10:52 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:22.291 00:10:52 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:22.291 00:10:52 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:22.291 00:10:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:22.291 00:10:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.291 00:10:52 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:22.291 00:10:52 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=557421 00:26:22.291 00:10:52 -- spdkcli/common.sh@34 -- # waitforlisten 557421 00:26:22.291 00:10:52 -- common/autotest_common.sh@817 -- # '[' -z 557421 ']' 00:26:22.291 00:10:52 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.291 00:10:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:22.291 00:10:52 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:22.291 00:10:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.291 00:10:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:22.291 00:10:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.291 [2024-04-27 00:10:52.465557] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:26:22.291 [2024-04-27 00:10:52.465635] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557421 ] 00:26:22.291 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.551 [2024-04-27 00:10:52.535155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:22.551 [2024-04-27 00:10:52.608805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.551 [2024-04-27 00:10:52.608805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.121 00:10:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:23.121 00:10:53 -- common/autotest_common.sh@850 -- # return 0 00:26:23.121 00:10:53 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:23.121 00:10:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:23.121 00:10:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.121 00:10:53 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:23.121 00:10:53 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:23.121 00:10:53 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:23.121 00:10:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:23.122 00:10:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.122 00:10:53 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:23.122 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:23.122 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:23.122 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:23.122 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:23.122 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:23.122 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:23.122 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:23.122 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:23.122 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:23.122 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:23.122 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:23.122 ' 00:26:23.382 [2024-04-27 00:10:53.597807] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:25.921 [2024-04-27 00:10:55.600451] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.861 [2024-04-27 00:10:56.764198] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:28.826 [2024-04-27 00:10:58.902392] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:30.738 [2024-04-27 00:11:00.735822] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:32.124 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:32.124 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:32.124 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:32.124 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:32.124 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:32.124 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:32.124 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:32.124 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:32.124 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:32.124 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:32.124 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:32.124 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:32.124 00:11:02 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:32.124 00:11:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:32.124 00:11:02 -- common/autotest_common.sh@10 -- # set +x 00:26:32.124 00:11:02 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:32.124 00:11:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:32.124 00:11:02 -- common/autotest_common.sh@10 -- # set +x 00:26:32.124 00:11:02 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:32.124 00:11:02 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:32.695 00:11:02 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:32.695 00:11:02 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:32.695 00:11:02 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:32.695 00:11:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:32.695 00:11:02 -- common/autotest_common.sh@10 -- # set +x 00:26:32.695 00:11:02 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:32.695 00:11:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:32.695 00:11:02 -- common/autotest_common.sh@10 -- # set +x 00:26:32.695 00:11:02 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:32.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:32.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:32.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:32.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:32.695 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:32.695 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:32.695 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:32.695 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:32.695 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:32.695 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:32.695 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:32.695 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:32.695 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:32.695 ' 00:26:37.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:37.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:37.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:37.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:37.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:37.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:37.979 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:37.979 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:37.979 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:37.979 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
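For reference, the subsystem layout exercised by the spdkcli_job.py batch above can be driven by hand with spdkcli.py, one command per invocation (the same form the test itself uses for `ll /nvmf`). A minimal sketch, assuming an nvmf_tgt is already running on the default RPC socket; SPDK_DIR is a placeholder for the local checkout path:

  SPDK_DIR=/path/to/spdk                       # assumption: adjust to the local checkout
  CLI="$SPDK_DIR/scripts/spdkcli.py"
  # back a namespace with a malloc bdev (arguments in the same order the job above uses: size, block size, name)
  "$CLI" /bdevs/malloc create 32 512 Malloc3
  # create the TCP transport with the same options the test passes
  "$CLI" nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  # create a subsystem, attach the bdev as namespace 1, and add a TCP listener
  "$CLI" /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  "$CLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  "$CLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  # inspect the resulting tree, as the check_match step above does
  "$CLI" ll /nvmf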
00:26:37.979 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:37.979 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:37.979 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:37.979 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:37.979 00:11:07 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:37.979 00:11:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:37.979 00:11:07 -- common/autotest_common.sh@10 -- # set +x 00:26:37.979 00:11:07 -- spdkcli/nvmf.sh@90 -- # killprocess 557421 00:26:37.979 00:11:07 -- common/autotest_common.sh@936 -- # '[' -z 557421 ']' 00:26:37.979 00:11:07 -- common/autotest_common.sh@940 -- # kill -0 557421 00:26:37.979 00:11:07 -- common/autotest_common.sh@941 -- # uname 00:26:37.979 00:11:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:37.979 00:11:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 557421 00:26:37.979 00:11:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:37.979 00:11:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:37.979 00:11:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 557421' 00:26:37.979 killing process with pid 557421 00:26:37.979 00:11:07 -- common/autotest_common.sh@955 -- # kill 557421 00:26:37.979 [2024-04-27 00:11:07.647949] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:37.979 00:11:07 -- common/autotest_common.sh@960 -- # wait 557421 00:26:37.979 00:11:07 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:37.979 00:11:07 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:37.979 00:11:07 -- spdkcli/common.sh@13 -- # '[' -n 557421 ']' 00:26:37.979 00:11:07 -- spdkcli/common.sh@14 -- # killprocess 557421 00:26:37.979 00:11:07 -- common/autotest_common.sh@936 -- # '[' -z 557421 ']' 00:26:37.979 00:11:07 -- common/autotest_common.sh@940 -- # kill -0 557421 00:26:37.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (557421) - No such process 00:26:37.979 00:11:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 557421 is not found' 00:26:37.979 Process with pid 557421 is not found 00:26:37.980 00:11:07 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:37.980 00:11:07 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:37.980 00:11:07 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:37.980 00:26:37.980 real 0m15.508s 00:26:37.980 user 0m31.886s 00:26:37.980 sys 0m0.682s 00:26:37.980 00:11:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:37.980 00:11:07 -- common/autotest_common.sh@10 -- # set +x 00:26:37.980 ************************************ 00:26:37.980 END TEST spdkcli_nvmf_tcp 00:26:37.980 ************************************ 00:26:37.980 00:11:07 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:37.980 00:11:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:37.980 00:11:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:37.980 00:11:07 -- 
common/autotest_common.sh@10 -- # set +x 00:26:37.980 ************************************ 00:26:37.980 START TEST nvmf_identify_passthru 00:26:37.980 ************************************ 00:26:37.980 00:11:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:37.980 * Looking for test storage... 00:26:37.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:37.980 00:11:08 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.980 00:11:08 -- nvmf/common.sh@7 -- # uname -s 00:26:37.980 00:11:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.980 00:11:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.980 00:11:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.980 00:11:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.980 00:11:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.980 00:11:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.980 00:11:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.980 00:11:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.980 00:11:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.980 00:11:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.980 00:11:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:37.980 00:11:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:37.980 00:11:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.980 00:11:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.980 00:11:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.980 00:11:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.980 00:11:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.980 00:11:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.980 00:11:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.980 00:11:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.980 00:11:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.980 00:11:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.980 00:11:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.980 00:11:08 -- paths/export.sh@5 -- # export PATH 00:26:37.980 00:11:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.980 00:11:08 -- nvmf/common.sh@47 -- # : 0 00:26:37.980 00:11:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.980 00:11:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.980 00:11:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.980 00:11:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.980 00:11:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.980 00:11:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.980 00:11:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.980 00:11:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.980 00:11:08 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.980 00:11:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.980 00:11:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.980 00:11:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.980 00:11:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.980 00:11:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.980 00:11:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.980 00:11:08 -- paths/export.sh@5 -- # export PATH 00:26:37.980 00:11:08 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.980 00:11:08 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:37.980 00:11:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:37.980 00:11:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.980 00:11:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:37.980 00:11:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:37.980 00:11:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:37.980 00:11:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.980 00:11:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:37.980 00:11:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.980 00:11:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:37.980 00:11:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:37.980 00:11:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.980 00:11:08 -- common/autotest_common.sh@10 -- # set +x 00:26:46.119 00:11:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:46.119 00:11:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:46.119 00:11:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:46.119 00:11:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:46.119 00:11:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:46.119 00:11:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:46.119 00:11:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:46.119 00:11:14 -- nvmf/common.sh@295 -- # net_devs=() 00:26:46.119 00:11:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:46.119 00:11:14 -- nvmf/common.sh@296 -- # e810=() 00:26:46.119 00:11:14 -- nvmf/common.sh@296 -- # local -ga e810 00:26:46.119 00:11:14 -- nvmf/common.sh@297 -- # x722=() 00:26:46.119 00:11:14 -- nvmf/common.sh@297 -- # local -ga x722 00:26:46.119 00:11:14 -- nvmf/common.sh@298 -- # mlx=() 00:26:46.119 00:11:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:46.119 00:11:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.119 00:11:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:46.119 00:11:14 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:46.119 00:11:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:46.119 00:11:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.119 00:11:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:46.119 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:46.119 00:11:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.119 00:11:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:46.119 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:46.119 00:11:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:46.119 00:11:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:46.119 00:11:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.119 00:11:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.119 00:11:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:46.120 00:11:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.120 00:11:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:46.120 Found net devices under 0000:31:00.0: cvl_0_0 00:26:46.120 00:11:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.120 00:11:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.120 00:11:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.120 00:11:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:46.120 00:11:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.120 00:11:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:46.120 Found net devices under 0000:31:00.1: cvl_0_1 00:26:46.120 00:11:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.120 00:11:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:46.120 00:11:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:46.120 00:11:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:46.120 00:11:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:46.120 00:11:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:46.120 00:11:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.120 00:11:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.120 00:11:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.120 00:11:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:46.120 00:11:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.120 00:11:14 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.120 00:11:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:46.120 00:11:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.120 00:11:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.120 00:11:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:46.120 00:11:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:46.120 00:11:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.120 00:11:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.120 00:11:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.120 00:11:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.120 00:11:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:46.120 00:11:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.120 00:11:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:46.120 00:11:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:46.120 00:11:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:46.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:26:46.120 00:26:46.120 --- 10.0.0.2 ping statistics --- 00:26:46.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.120 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:26:46.120 00:11:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:46.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:46.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:26:46.120 00:26:46.120 --- 10.0.0.1 ping statistics --- 00:26:46.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.120 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:26:46.120 00:11:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.120 00:11:15 -- nvmf/common.sh@411 -- # return 0 00:26:46.120 00:11:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:46.120 00:11:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.120 00:11:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:46.120 00:11:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:46.120 00:11:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.120 00:11:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:46.120 00:11:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:46.120 00:11:15 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:46.120 00:11:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:46.120 00:11:15 -- common/autotest_common.sh@10 -- # set +x 00:26:46.120 00:11:15 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:46.120 00:11:15 -- common/autotest_common.sh@1510 -- # bdfs=() 00:26:46.120 00:11:15 -- common/autotest_common.sh@1510 -- # local bdfs 00:26:46.120 00:11:15 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:26:46.120 00:11:15 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:26:46.120 00:11:15 -- common/autotest_common.sh@1499 -- # bdfs=() 00:26:46.120 00:11:15 -- common/autotest_common.sh@1499 -- # local bdfs 00:26:46.120 00:11:15 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:26:46.120 00:11:15 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:46.120 00:11:15 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:26:46.120 00:11:15 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:26:46.120 00:11:15 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:26:46.120 00:11:15 -- common/autotest_common.sh@1513 -- # echo 0000:65:00.0 00:26:46.120 00:11:15 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:26:46.120 00:11:15 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:26:46.120 00:11:15 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:46.120 00:11:15 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:46.120 00:11:15 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:46.120 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.120 00:11:15 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:26:46.120 00:11:15 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:46.120 00:11:15 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:46.120 00:11:15 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:46.120 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.120 00:11:16 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:26:46.120 00:11:16 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:46.120 00:11:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:46.120 00:11:16 -- common/autotest_common.sh@10 -- # set +x 00:26:46.120 00:11:16 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:46.120 00:11:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:46.120 00:11:16 -- common/autotest_common.sh@10 -- # set +x 00:26:46.381 00:11:16 -- target/identify_passthru.sh@31 -- # nvmfpid=564276 00:26:46.381 00:11:16 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:46.381 00:11:16 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:46.381 00:11:16 -- target/identify_passthru.sh@35 -- # waitforlisten 564276 00:26:46.381 00:11:16 -- common/autotest_common.sh@817 -- # '[' -z 564276 ']' 00:26:46.381 00:11:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.381 00:11:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:46.381 00:11:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.381 00:11:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:46.381 00:11:16 -- common/autotest_common.sh@10 -- # set +x 00:26:46.381 [2024-04-27 00:11:16.387464] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
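The serial/model probe above reduces to two steps: list the local NVMe controllers and run spdk_nvme_identify against the first PCIe address. A minimal sketch of that flow, using only the commands visible in the trace (the BDF 0000:65:00.0 is simply the value found on this rig; SPDK_DIR is a placeholder):

  SPDK_DIR=/path/to/spdk                                   # assumption: adjust to the local checkout
  # first NVMe traddr reported by gen_nvme.sh
  bdf=$("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  # read serial and model directly over PCIe, exactly as the test does
  serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "bdf=$bdf serial=$serial model=$model"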
00:26:46.381 [2024-04-27 00:11:16.387516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.381 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.381 [2024-04-27 00:11:16.453296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:46.381 [2024-04-27 00:11:16.520434] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.381 [2024-04-27 00:11:16.520470] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.381 [2024-04-27 00:11:16.520477] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.381 [2024-04-27 00:11:16.520484] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.381 [2024-04-27 00:11:16.520490] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.381 [2024-04-27 00:11:16.520797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.381 [2024-04-27 00:11:16.520905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.381 [2024-04-27 00:11:16.521202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.381 [2024-04-27 00:11:16.521203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.951 00:11:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:46.951 00:11:17 -- common/autotest_common.sh@850 -- # return 0 00:26:46.951 00:11:17 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:46.951 00:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.951 00:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:46.951 INFO: Log level set to 20 00:26:46.951 INFO: Requests: 00:26:46.951 { 00:26:46.951 "jsonrpc": "2.0", 00:26:46.951 "method": "nvmf_set_config", 00:26:46.951 "id": 1, 00:26:46.951 "params": { 00:26:46.951 "admin_cmd_passthru": { 00:26:46.951 "identify_ctrlr": true 00:26:46.951 } 00:26:46.951 } 00:26:46.951 } 00:26:46.951 00:26:47.211 INFO: response: 00:26:47.211 { 00:26:47.211 "jsonrpc": "2.0", 00:26:47.211 "id": 1, 00:26:47.211 "result": true 00:26:47.211 } 00:26:47.211 00:26:47.211 00:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.211 00:11:17 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:47.211 00:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.211 00:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:47.211 INFO: Setting log level to 20 00:26:47.211 INFO: Setting log level to 20 00:26:47.211 INFO: Log level set to 20 00:26:47.211 INFO: Log level set to 20 00:26:47.211 INFO: Requests: 00:26:47.211 { 00:26:47.211 "jsonrpc": "2.0", 00:26:47.211 "method": "framework_start_init", 00:26:47.211 "id": 1 00:26:47.211 } 00:26:47.211 00:26:47.211 INFO: Requests: 00:26:47.211 { 00:26:47.211 "jsonrpc": "2.0", 00:26:47.211 "method": "framework_start_init", 00:26:47.211 "id": 1 00:26:47.211 } 00:26:47.211 00:26:47.211 [2024-04-27 00:11:17.241252] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:47.211 INFO: response: 00:26:47.211 { 00:26:47.211 "jsonrpc": "2.0", 00:26:47.211 "id": 1, 00:26:47.211 "result": true 00:26:47.211 } 00:26:47.211 00:26:47.211 INFO: response: 00:26:47.211 { 00:26:47.211 
"jsonrpc": "2.0", 00:26:47.211 "id": 1, 00:26:47.211 "result": true 00:26:47.211 } 00:26:47.211 00:26:47.211 00:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.211 00:11:17 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.211 00:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.211 00:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:47.211 INFO: Setting log level to 40 00:26:47.211 INFO: Setting log level to 40 00:26:47.211 INFO: Setting log level to 40 00:26:47.211 [2024-04-27 00:11:17.254492] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.211 00:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.211 00:11:17 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:47.211 00:11:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:47.211 00:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:47.211 00:11:17 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:26:47.211 00:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.211 00:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:47.472 Nvme0n1 00:26:47.472 00:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.472 00:11:17 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:47.472 00:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.472 00:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:47.472 00:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.472 00:11:17 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:47.472 00:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.472 00:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:47.472 00:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.472 00:11:17 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.472 00:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.472 00:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:47.472 [2024-04-27 00:11:17.636084] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.472 00:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.472 00:11:17 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:47.472 00:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.472 00:11:17 -- common/autotest_common.sh@10 -- # set +x 00:26:47.472 [2024-04-27 00:11:17.643824] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:47.472 [ 00:26:47.472 { 00:26:47.472 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:47.472 "subtype": "Discovery", 00:26:47.472 "listen_addresses": [], 00:26:47.472 "allow_any_host": true, 00:26:47.472 "hosts": [] 00:26:47.472 }, 00:26:47.472 { 00:26:47.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:47.472 "subtype": "NVMe", 00:26:47.472 "listen_addresses": [ 00:26:47.472 { 00:26:47.472 "transport": "TCP", 00:26:47.472 "trtype": "TCP", 00:26:47.472 "adrfam": "IPv4", 00:26:47.472 "traddr": "10.0.0.2", 00:26:47.472 "trsvcid": "4420" 00:26:47.472 } 00:26:47.472 ], 
00:26:47.472 "allow_any_host": true, 00:26:47.472 "hosts": [], 00:26:47.472 "serial_number": "SPDK00000000000001", 00:26:47.472 "model_number": "SPDK bdev Controller", 00:26:47.472 "max_namespaces": 1, 00:26:47.472 "min_cntlid": 1, 00:26:47.472 "max_cntlid": 65519, 00:26:47.472 "namespaces": [ 00:26:47.472 { 00:26:47.472 "nsid": 1, 00:26:47.472 "bdev_name": "Nvme0n1", 00:26:47.472 "name": "Nvme0n1", 00:26:47.472 "nguid": "3634473052605494002538450000001F", 00:26:47.472 "uuid": "36344730-5260-5494-0025-38450000001f" 00:26:47.472 } 00:26:47.472 ] 00:26:47.472 } 00:26:47.472 ] 00:26:47.472 00:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.472 00:11:17 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:47.472 00:11:17 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:47.472 00:11:17 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:47.472 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.732 00:11:17 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:26:47.732 00:11:17 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:47.732 00:11:17 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:47.732 00:11:17 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:47.732 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.992 00:11:18 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:26:47.992 00:11:18 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:26:47.992 00:11:18 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:26:47.992 00:11:18 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:47.992 00:11:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.992 00:11:18 -- common/autotest_common.sh@10 -- # set +x 00:26:47.992 00:11:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.992 00:11:18 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:47.992 00:11:18 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:47.992 00:11:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:47.992 00:11:18 -- nvmf/common.sh@117 -- # sync 00:26:47.992 00:11:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:47.992 00:11:18 -- nvmf/common.sh@120 -- # set +e 00:26:47.992 00:11:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:47.992 00:11:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:47.992 rmmod nvme_tcp 00:26:47.992 rmmod nvme_fabrics 00:26:47.992 rmmod nvme_keyring 00:26:47.992 00:11:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:47.992 00:11:18 -- nvmf/common.sh@124 -- # set -e 00:26:47.992 00:11:18 -- nvmf/common.sh@125 -- # return 0 00:26:47.992 00:11:18 -- nvmf/common.sh@478 -- # '[' -n 564276 ']' 00:26:47.992 00:11:18 -- nvmf/common.sh@479 -- # killprocess 564276 00:26:47.992 00:11:18 -- common/autotest_common.sh@936 -- # '[' -z 564276 ']' 00:26:47.992 00:11:18 -- common/autotest_common.sh@940 -- # kill -0 564276 00:26:47.992 00:11:18 -- common/autotest_common.sh@941 -- # uname 00:26:47.992 00:11:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:47.992 00:11:18 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 564276 00:26:47.992 00:11:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:47.992 00:11:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:47.992 00:11:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 564276' 00:26:47.992 killing process with pid 564276 00:26:47.992 00:11:18 -- common/autotest_common.sh@955 -- # kill 564276 00:26:47.992 [2024-04-27 00:11:18.144762] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:47.992 00:11:18 -- common/autotest_common.sh@960 -- # wait 564276 00:26:48.252 00:11:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:48.252 00:11:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:48.252 00:11:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:48.252 00:11:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.252 00:11:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.252 00:11:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.252 00:11:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:48.252 00:11:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.819 00:11:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:50.819 00:26:50.819 real 0m12.514s 00:26:50.819 user 0m9.822s 00:26:50.819 sys 0m6.005s 00:26:50.819 00:11:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:50.819 00:11:20 -- common/autotest_common.sh@10 -- # set +x 00:26:50.819 ************************************ 00:26:50.819 END TEST nvmf_identify_passthru 00:26:50.819 ************************************ 00:26:50.819 00:11:20 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:50.819 00:11:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:50.819 00:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:50.819 00:11:20 -- common/autotest_common.sh@10 -- # set +x 00:26:50.819 ************************************ 00:26:50.819 START TEST nvmf_dif 00:26:50.819 ************************************ 00:26:50.819 00:11:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:50.819 * Looking for test storage... 
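The passthru check that just completed boils down to: export a local NVMe controller through an NVMe-oF/TCP subsystem with identify passthru enabled, then confirm that identify data read over the fabric matches what the PCIe device reports. A minimal sketch of the RPC and verification side, reusing the method names and arguments from the trace and assuming the target was started with --wait-for-rpc and the default /var/tmp/spdk.sock socket (SPDK_DIR is a placeholder):

  SPDK_DIR=/path/to/spdk                                    # assumption: adjust to the local checkout
  RPC="$SPDK_DIR/scripts/rpc.py"
  "$RPC" nvmf_set_config --passthru-identify-ctrlr          # must be set before framework_start_init
  "$RPC" framework_start_init
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # identify over the fabric; the serial printed here should match the PCIe-side value
  "$SPDK_DIR/build/bin/spdk_nvme_identify" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      | grep 'Serial Number:'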
00:26:50.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:50.819 00:11:20 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.819 00:11:20 -- nvmf/common.sh@7 -- # uname -s 00:26:50.819 00:11:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.819 00:11:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.819 00:11:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.819 00:11:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.819 00:11:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.819 00:11:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.819 00:11:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.819 00:11:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.819 00:11:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.819 00:11:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.819 00:11:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:50.819 00:11:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:50.819 00:11:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.819 00:11:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.819 00:11:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.819 00:11:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.819 00:11:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.819 00:11:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.819 00:11:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.819 00:11:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.819 00:11:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.819 00:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.819 00:11:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.819 00:11:20 -- paths/export.sh@5 -- # export PATH 00:26:50.819 00:11:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.819 00:11:20 -- nvmf/common.sh@47 -- # : 0 00:26:50.819 00:11:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.819 00:11:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.819 00:11:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.819 00:11:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.819 00:11:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.819 00:11:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.819 00:11:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.819 00:11:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.819 00:11:20 -- target/dif.sh@15 -- # NULL_META=16 00:26:50.819 00:11:20 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:50.819 00:11:20 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:50.819 00:11:20 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:50.819 00:11:20 -- target/dif.sh@135 -- # nvmftestinit 00:26:50.819 00:11:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:50.819 00:11:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.819 00:11:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:50.819 00:11:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:50.819 00:11:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:50.819 00:11:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.819 00:11:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:50.819 00:11:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.819 00:11:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:50.819 00:11:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:50.819 00:11:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.819 00:11:20 -- common/autotest_common.sh@10 -- # set +x 00:26:58.964 00:11:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:58.964 00:11:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:58.964 00:11:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:58.964 00:11:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:58.964 00:11:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:58.964 00:11:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:58.964 00:11:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:58.964 00:11:27 -- nvmf/common.sh@295 -- # net_devs=() 00:26:58.964 00:11:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:58.964 00:11:27 -- nvmf/common.sh@296 -- # e810=() 00:26:58.964 00:11:27 -- nvmf/common.sh@296 -- # local -ga e810 00:26:58.964 00:11:27 -- nvmf/common.sh@297 -- # x722=() 00:26:58.964 00:11:27 -- nvmf/common.sh@297 -- # local -ga x722 00:26:58.964 00:11:27 -- nvmf/common.sh@298 -- # mlx=() 00:26:58.964 00:11:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:58.964 00:11:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:26:58.964 00:11:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.964 00:11:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:58.964 00:11:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:58.964 00:11:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:58.965 00:11:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:58.965 00:11:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.965 00:11:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:58.965 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:58.965 00:11:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.965 00:11:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:58.965 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:58.965 00:11:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:58.965 00:11:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.965 00:11:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.965 00:11:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:58.965 00:11:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.965 00:11:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:58.965 Found net devices under 0000:31:00.0: cvl_0_0 00:26:58.965 00:11:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.965 00:11:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.965 00:11:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.965 00:11:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:58.965 00:11:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.965 00:11:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:58.965 Found net devices under 0000:31:00.1: cvl_0_1 00:26:58.965 00:11:27 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:58.965 00:11:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:58.965 00:11:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:58.965 00:11:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:58.965 00:11:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:58.965 00:11:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.965 00:11:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.965 00:11:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.965 00:11:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:58.965 00:11:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.965 00:11:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.965 00:11:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:58.965 00:11:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.965 00:11:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.965 00:11:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:58.965 00:11:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:58.965 00:11:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.965 00:11:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.965 00:11:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.965 00:11:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.965 00:11:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:58.965 00:11:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.965 00:11:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.965 00:11:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.965 00:11:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:58.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:26:58.965 00:26:58.965 --- 10.0.0.2 ping statistics --- 00:26:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.965 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:26:58.965 00:11:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:26:58.965 00:26:58.965 --- 10.0.0.1 ping statistics --- 00:26:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.965 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:26:58.965 00:11:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.965 00:11:27 -- nvmf/common.sh@411 -- # return 0 00:26:58.965 00:11:27 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:26:58.965 00:11:27 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:01.537 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:01.537 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:01.537 00:11:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.537 00:11:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:01.537 00:11:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:01.537 00:11:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.537 00:11:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:01.537 00:11:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:01.537 00:11:31 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:01.537 00:11:31 -- target/dif.sh@137 -- # nvmfappstart 00:27:01.537 00:11:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:01.537 00:11:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:01.537 00:11:31 -- common/autotest_common.sh@10 -- # set +x 00:27:01.537 00:11:31 -- nvmf/common.sh@470 -- # nvmfpid=570508 00:27:01.537 00:11:31 -- nvmf/common.sh@471 -- # waitforlisten 570508 00:27:01.537 00:11:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:01.537 00:11:31 -- common/autotest_common.sh@817 -- # '[' -z 570508 ']' 00:27:01.537 00:11:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.537 00:11:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:01.537 00:11:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
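The nvmf_tcp_init sequence above splits the two E810 ports between network namespaces: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A minimal bash sketch of that wiring, using the interface names and addresses from this run and omitting the address flushes, error handling and cleanup traps of the real helper:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                   # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns "$NS"                      # first E810 port becomes the target-side NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept incoming NVMe/TCP connections
ping -c 1 10.0.0.2                                   # root namespace -> target address
ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> initiator address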
00:27:01.537 00:11:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:01.537 00:11:31 -- common/autotest_common.sh@10 -- # set +x 00:27:01.537 [2024-04-27 00:11:31.706099] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:27:01.537 [2024-04-27 00:11:31.706148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.537 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.798 [2024-04-27 00:11:31.774050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.798 [2024-04-27 00:11:31.839671] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.798 [2024-04-27 00:11:31.839718] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.798 [2024-04-27 00:11:31.839726] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.798 [2024-04-27 00:11:31.839732] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.798 [2024-04-27 00:11:31.839737] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.798 [2024-04-27 00:11:31.839756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.369 00:11:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:02.369 00:11:32 -- common/autotest_common.sh@850 -- # return 0 00:27:02.369 00:11:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:02.369 00:11:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:02.369 00:11:32 -- common/autotest_common.sh@10 -- # set +x 00:27:02.369 00:11:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.369 00:11:32 -- target/dif.sh@139 -- # create_transport 00:27:02.369 00:11:32 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:02.369 00:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.369 00:11:32 -- common/autotest_common.sh@10 -- # set +x 00:27:02.369 [2024-04-27 00:11:32.514251] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.369 00:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.369 00:11:32 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:02.369 00:11:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:02.369 00:11:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:02.369 00:11:32 -- common/autotest_common.sh@10 -- # set +x 00:27:02.629 ************************************ 00:27:02.629 START TEST fio_dif_1_default 00:27:02.629 ************************************ 00:27:02.629 00:11:32 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:02.629 00:11:32 -- target/dif.sh@86 -- # create_subsystems 0 00:27:02.629 00:11:32 -- target/dif.sh@28 -- # local sub 00:27:02.629 00:11:32 -- target/dif.sh@30 -- # for sub in "$@" 00:27:02.629 00:11:32 -- target/dif.sh@31 -- # create_subsystem 0 00:27:02.629 00:11:32 -- target/dif.sh@18 -- # local sub_id=0 00:27:02.629 00:11:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:02.629 00:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.629 00:11:32 -- common/autotest_common.sh@10 -- # set +x 00:27:02.629 
bdev_null0 00:27:02.629 00:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.629 00:11:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:02.629 00:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.629 00:11:32 -- common/autotest_common.sh@10 -- # set +x 00:27:02.629 00:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.629 00:11:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:02.629 00:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.629 00:11:32 -- common/autotest_common.sh@10 -- # set +x 00:27:02.629 00:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.629 00:11:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:02.629 00:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.629 00:11:32 -- common/autotest_common.sh@10 -- # set +x 00:27:02.629 [2024-04-27 00:11:32.714920] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.629 00:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.629 00:11:32 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:02.629 00:11:32 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:02.629 00:11:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:02.629 00:11:32 -- nvmf/common.sh@521 -- # config=() 00:27:02.629 00:11:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:02.629 00:11:32 -- nvmf/common.sh@521 -- # local subsystem config 00:27:02.629 00:11:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.629 00:11:32 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:02.629 00:11:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.629 { 00:27:02.629 "params": { 00:27:02.629 "name": "Nvme$subsystem", 00:27:02.629 "trtype": "$TEST_TRANSPORT", 00:27:02.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.629 "adrfam": "ipv4", 00:27:02.629 "trsvcid": "$NVMF_PORT", 00:27:02.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.629 "hdgst": ${hdgst:-false}, 00:27:02.629 "ddgst": ${ddgst:-false} 00:27:02.629 }, 00:27:02.629 "method": "bdev_nvme_attach_controller" 00:27:02.629 } 00:27:02.629 EOF 00:27:02.629 )") 00:27:02.629 00:11:32 -- target/dif.sh@82 -- # gen_fio_conf 00:27:02.629 00:11:32 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:02.629 00:11:32 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:02.629 00:11:32 -- target/dif.sh@54 -- # local file 00:27:02.629 00:11:32 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:02.629 00:11:32 -- target/dif.sh@56 -- # cat 00:27:02.629 00:11:32 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:02.629 00:11:32 -- common/autotest_common.sh@1327 -- # shift 00:27:02.629 00:11:32 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:02.629 00:11:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:02.629 00:11:32 -- nvmf/common.sh@543 -- # cat 00:27:02.629 00:11:32 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:02.630 00:11:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:02.630 00:11:32 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:02.630 00:11:32 -- target/dif.sh@72 -- # (( file <= files )) 00:27:02.630 00:11:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:02.630 00:11:32 -- nvmf/common.sh@545 -- # jq . 00:27:02.630 00:11:32 -- nvmf/common.sh@546 -- # IFS=, 00:27:02.630 00:11:32 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:02.630 "params": { 00:27:02.630 "name": "Nvme0", 00:27:02.630 "trtype": "tcp", 00:27:02.630 "traddr": "10.0.0.2", 00:27:02.630 "adrfam": "ipv4", 00:27:02.630 "trsvcid": "4420", 00:27:02.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:02.630 "hdgst": false, 00:27:02.630 "ddgst": false 00:27:02.630 }, 00:27:02.630 "method": "bdev_nvme_attach_controller" 00:27:02.630 }' 00:27:02.630 00:11:32 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:02.630 00:11:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:02.630 00:11:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:02.630 00:11:32 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:02.630 00:11:32 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:02.630 00:11:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:02.630 00:11:32 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:02.630 00:11:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:02.630 00:11:32 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:02.630 00:11:32 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:03.216 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:03.216 fio-3.35 00:27:03.216 Starting 1 thread 00:27:03.216 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.462 00:27:15.462 filename0: (groupid=0, jobs=1): err= 0: pid=571049: Sat Apr 27 00:11:43 2024 00:27:15.462 read: IOPS=186, BW=746KiB/s (764kB/s)(7472KiB/10016msec) 00:27:15.462 slat (nsec): min=5322, max=30662, avg=6070.58, stdev=1725.65 00:27:15.462 clat (usec): min=708, max=42155, avg=21430.99, stdev=20472.82 00:27:15.462 lat (usec): min=714, max=42161, avg=21437.06, stdev=20472.76 00:27:15.462 clat percentiles (usec): 00:27:15.462 | 1.00th=[ 898], 5.00th=[ 971], 10.00th=[ 988], 20.00th=[ 1004], 00:27:15.462 | 30.00th=[ 1012], 40.00th=[ 1020], 50.00th=[ 1958], 60.00th=[41681], 00:27:15.462 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:15.462 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:15.462 | 99.99th=[42206] 00:27:15.462 bw ( KiB/s): min= 704, max= 768, per=99.87%, avg=745.60, stdev=29.55, samples=20 00:27:15.462 iops : min= 176, max= 192, avg=186.40, stdev= 7.39, samples=20 00:27:15.462 lat (usec) : 750=0.21%, 1000=19.70% 00:27:15.462 lat (msec) : 2=30.19%, 50=49.89% 00:27:15.462 cpu : usr=94.90%, sys=4.91%, ctx=10, majf=0, minf=217 00:27:15.462 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:15.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:15.462 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.462 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:15.462 00:27:15.462 Run status group 0 (all jobs): 00:27:15.462 READ: bw=746KiB/s (764kB/s), 746KiB/s-746KiB/s (764kB/s-764kB/s), io=7472KiB (7651kB), run=10016-10016msec 00:27:15.462 00:11:43 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:15.462 00:11:43 -- target/dif.sh@43 -- # local sub 00:27:15.462 00:11:43 -- target/dif.sh@45 -- # for sub in "$@" 00:27:15.462 00:11:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:15.462 00:11:43 -- target/dif.sh@36 -- # local sub_id=0 00:27:15.462 00:11:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:15.462 00:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.462 00:11:43 -- common/autotest_common.sh@10 -- # set +x 00:27:15.462 00:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.462 00:11:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:15.462 00:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.462 00:11:43 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 00:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.463 00:27:15.463 real 0m11.205s 00:27:15.463 user 0m25.844s 00:27:15.463 sys 0m0.808s 00:27:15.463 00:11:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:15.463 00:11:43 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 ************************************ 00:27:15.463 END TEST fio_dif_1_default 00:27:15.463 ************************************ 00:27:15.463 00:11:43 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:15.463 00:11:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:15.463 00:11:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:15.463 00:11:43 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 ************************************ 00:27:15.463 START TEST fio_dif_1_multi_subsystems 00:27:15.463 ************************************ 00:27:15.463 00:11:44 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:15.463 00:11:44 -- target/dif.sh@92 -- # local files=1 00:27:15.463 00:11:44 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:15.463 00:11:44 -- target/dif.sh@28 -- # local sub 00:27:15.463 00:11:44 -- target/dif.sh@30 -- # for sub in "$@" 00:27:15.463 00:11:44 -- target/dif.sh@31 -- # create_subsystem 0 00:27:15.463 00:11:44 -- target/dif.sh@18 -- # local sub_id=0 00:27:15.463 00:11:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:15.463 00:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.463 00:11:44 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 bdev_null0 00:27:15.463 00:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.463 00:11:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:15.463 00:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.463 00:11:44 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 00:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.463 00:11:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:15.463 00:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.463 00:11:44 -- common/autotest_common.sh@10 -- # set +x 
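Each fio_dif test case provisions its backing store through the same short RPC sequence: a metadata-capable null bdev is created with the requested DIF type, wrapped in a subsystem, attached as a namespace, and exposed on the 10.0.0.2:4420 TCP listener of the transport created earlier with --dif-insert-or-strip. Outside the dif.sh wrappers the same steps can be issued with scripts/rpc.py against the target's RPC socket (the harness's rpc_cmd wrapper handles the socket path); the sketch below mirrors the rpc_cmd calls visible in this log, with flag spellings that may vary between SPDK releases:

RPC="./scripts/rpc.py"                                               # add -s <socket> if the target uses a non-default RPC socket
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip           # one-time TCP transport with DIF insert/strip enabled
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1    # 64 MiB null bdev, 512B blocks + 16B metadata, DIF type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0     # the bdev becomes namespace 1 of the subsystem
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420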
00:27:15.463 00:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.463 00:11:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:15.463 00:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.463 00:11:44 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 [2024-04-27 00:11:44.100481] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.463 00:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.463 00:11:44 -- target/dif.sh@30 -- # for sub in "$@" 00:27:15.463 00:11:44 -- target/dif.sh@31 -- # create_subsystem 1 00:27:15.463 00:11:44 -- target/dif.sh@18 -- # local sub_id=1 00:27:15.463 00:11:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:15.463 00:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.463 00:11:44 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 bdev_null1 00:27:15.463 00:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.463 00:11:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:15.463 00:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.463 00:11:44 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 00:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.463 00:11:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:15.463 00:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.463 00:11:44 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 00:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.463 00:11:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.463 00:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.463 00:11:44 -- common/autotest_common.sh@10 -- # set +x 00:27:15.463 00:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.463 00:11:44 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:15.463 00:11:44 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:15.463 00:11:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:15.463 00:11:44 -- nvmf/common.sh@521 -- # config=() 00:27:15.463 00:11:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:15.463 00:11:44 -- nvmf/common.sh@521 -- # local subsystem config 00:27:15.463 00:11:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:15.463 00:11:44 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:15.463 00:11:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:15.463 { 00:27:15.463 "params": { 00:27:15.463 "name": "Nvme$subsystem", 00:27:15.463 "trtype": "$TEST_TRANSPORT", 00:27:15.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.463 "adrfam": "ipv4", 00:27:15.463 "trsvcid": "$NVMF_PORT", 00:27:15.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.463 "hdgst": ${hdgst:-false}, 00:27:15.463 "ddgst": ${ddgst:-false} 00:27:15.463 }, 00:27:15.463 "method": "bdev_nvme_attach_controller" 00:27:15.463 } 00:27:15.463 EOF 00:27:15.463 )") 00:27:15.463 00:11:44 -- target/dif.sh@82 -- # 
gen_fio_conf 00:27:15.463 00:11:44 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:15.463 00:11:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.463 00:11:44 -- target/dif.sh@54 -- # local file 00:27:15.463 00:11:44 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:15.463 00:11:44 -- target/dif.sh@56 -- # cat 00:27:15.463 00:11:44 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:15.463 00:11:44 -- common/autotest_common.sh@1327 -- # shift 00:27:15.463 00:11:44 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:15.463 00:11:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.463 00:11:44 -- nvmf/common.sh@543 -- # cat 00:27:15.463 00:11:44 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:15.463 00:11:44 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:15.463 00:11:44 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:15.463 00:11:44 -- target/dif.sh@72 -- # (( file <= files )) 00:27:15.463 00:11:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:15.463 00:11:44 -- target/dif.sh@73 -- # cat 00:27:15.463 00:11:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:15.463 00:11:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:15.463 { 00:27:15.463 "params": { 00:27:15.463 "name": "Nvme$subsystem", 00:27:15.463 "trtype": "$TEST_TRANSPORT", 00:27:15.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.463 "adrfam": "ipv4", 00:27:15.463 "trsvcid": "$NVMF_PORT", 00:27:15.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.463 "hdgst": ${hdgst:-false}, 00:27:15.463 "ddgst": ${ddgst:-false} 00:27:15.463 }, 00:27:15.463 "method": "bdev_nvme_attach_controller" 00:27:15.463 } 00:27:15.463 EOF 00:27:15.463 )") 00:27:15.463 00:11:44 -- target/dif.sh@72 -- # (( file++ )) 00:27:15.463 00:11:44 -- target/dif.sh@72 -- # (( file <= files )) 00:27:15.463 00:11:44 -- nvmf/common.sh@543 -- # cat 00:27:15.463 00:11:44 -- nvmf/common.sh@545 -- # jq . 
00:27:15.463 00:11:44 -- nvmf/common.sh@546 -- # IFS=, 00:27:15.463 00:11:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:15.463 "params": { 00:27:15.463 "name": "Nvme0", 00:27:15.463 "trtype": "tcp", 00:27:15.463 "traddr": "10.0.0.2", 00:27:15.463 "adrfam": "ipv4", 00:27:15.463 "trsvcid": "4420", 00:27:15.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:15.464 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:15.464 "hdgst": false, 00:27:15.464 "ddgst": false 00:27:15.464 }, 00:27:15.464 "method": "bdev_nvme_attach_controller" 00:27:15.464 },{ 00:27:15.464 "params": { 00:27:15.464 "name": "Nvme1", 00:27:15.464 "trtype": "tcp", 00:27:15.464 "traddr": "10.0.0.2", 00:27:15.464 "adrfam": "ipv4", 00:27:15.464 "trsvcid": "4420", 00:27:15.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:15.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:15.464 "hdgst": false, 00:27:15.464 "ddgst": false 00:27:15.464 }, 00:27:15.464 "method": "bdev_nvme_attach_controller" 00:27:15.464 }' 00:27:15.464 00:11:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:15.464 00:11:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:15.464 00:11:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.464 00:11:44 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:15.464 00:11:44 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:15.464 00:11:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:15.464 00:11:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:15.464 00:11:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:15.464 00:11:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:15.464 00:11:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:15.464 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:15.464 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:15.464 fio-3.35 00:27:15.464 Starting 2 threads 00:27:15.464 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.461 00:27:25.461 filename0: (groupid=0, jobs=1): err= 0: pid=573426: Sat Apr 27 00:11:55 2024 00:27:25.461 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10014msec) 00:27:25.461 slat (nsec): min=5314, max=71845, avg=5963.50, stdev=2738.73 00:27:25.461 clat (usec): min=40908, max=43014, avg=41881.55, stdev=325.22 00:27:25.461 lat (usec): min=40914, max=43020, avg=41887.51, stdev=325.29 00:27:25.461 clat percentiles (usec): 00:27:25.461 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:27:25.461 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:25.461 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:25.461 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:27:25.461 | 99.99th=[43254] 00:27:25.461 bw ( KiB/s): min= 352, max= 384, per=49.76%, avg=380.80, stdev= 9.85, samples=20 00:27:25.461 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:27:25.461 lat (msec) : 50=100.00% 00:27:25.461 cpu : usr=96.83%, sys=2.97%, ctx=19, majf=0, minf=99 00:27:25.461 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:25.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:25.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.461 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.461 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:25.461 filename1: (groupid=0, jobs=1): err= 0: pid=573427: Sat Apr 27 00:11:55 2024 00:27:25.461 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10013msec) 00:27:25.461 slat (nsec): min=5311, max=67051, avg=6302.23, stdev=2715.14 00:27:25.461 clat (usec): min=40910, max=42993, avg=41876.38, stdev=336.43 00:27:25.461 lat (usec): min=40916, max=42999, avg=41882.68, stdev=336.55 00:27:25.461 clat percentiles (usec): 00:27:25.461 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:27:25.461 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:25.461 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:25.461 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:27:25.461 | 99.99th=[43254] 00:27:25.461 bw ( KiB/s): min= 352, max= 384, per=49.76%, avg=380.80, stdev= 9.85, samples=20 00:27:25.461 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:27:25.461 lat (msec) : 50=100.00% 00:27:25.461 cpu : usr=96.88%, sys=2.92%, ctx=14, majf=0, minf=169 00:27:25.461 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:25.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.461 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.461 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:25.461 00:27:25.461 Run status group 0 (all jobs): 00:27:25.461 READ: bw=764KiB/s (782kB/s), 382KiB/s-382KiB/s (391kB/s-391kB/s), io=7648KiB (7832kB), run=10013-10014msec 00:27:25.461 00:11:55 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:25.461 00:11:55 -- target/dif.sh@43 -- # local sub 00:27:25.461 00:11:55 -- target/dif.sh@45 -- # for sub in "$@" 00:27:25.461 00:11:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:25.461 00:11:55 -- target/dif.sh@36 -- # local sub_id=0 00:27:25.461 00:11:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:25.461 00:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 00:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.461 00:11:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:25.461 00:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 00:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.461 00:11:55 -- target/dif.sh@45 -- # for sub in "$@" 00:27:25.461 00:11:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:25.461 00:11:55 -- target/dif.sh@36 -- # local sub_id=1 00:27:25.461 00:11:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.461 00:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 00:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.461 00:11:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:25.461 00:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 
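On the initiator side no kernel NVMe connection is made: gen_nvmf_target_json emits one bdev_nvme_attach_controller block per subsystem (the JSON printed above for Nvme0 and Nvme1), and fio is started with the SPDK bdev ioengine preloaded, reading that JSON from /dev/fd/62 and the generated job file from /dev/fd/61. A hedged sketch of an equivalent standalone invocation follows; the job file is an assumption (gen_fio_conf's output is not printed in this log) based on the convention that namespace 1 of an attached controller NvmeX appears as bdev NvmeXn1, while the rw/bs/iodepth values and binary paths are taken from this run:

# bdev.json holds the printf'd configuration shown above
# (two "bdev_nvme_attach_controller" param blocks, Nvme0 and Nvme1).
cat > dif.fio <<'EOF'
[global]
thread=1              # the SPDK fio plugins require fio's thread mode
rw=randread
bs=4k
iodepth=4

[filename0]
filename=Nvme0n1      # assumed bdev name: namespace 1 of controller Nvme0

[filename1]
filename=Nvme1n1      # assumed bdev name: namespace 1 of controller Nvme1
EOF

LD_PRELOAD=./spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json dif.fio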
00:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.461 00:27:25.461 real 0m11.361s 00:27:25.461 user 0m31.287s 00:27:25.461 sys 0m0.960s 00:27:25.461 00:11:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 ************************************ 00:27:25.461 END TEST fio_dif_1_multi_subsystems 00:27:25.461 ************************************ 00:27:25.461 00:11:55 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:25.461 00:11:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:25.461 00:11:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 ************************************ 00:27:25.461 START TEST fio_dif_rand_params 00:27:25.461 ************************************ 00:27:25.461 00:11:55 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:27:25.461 00:11:55 -- target/dif.sh@100 -- # local NULL_DIF 00:27:25.461 00:11:55 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:25.461 00:11:55 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:25.461 00:11:55 -- target/dif.sh@103 -- # bs=128k 00:27:25.461 00:11:55 -- target/dif.sh@103 -- # numjobs=3 00:27:25.461 00:11:55 -- target/dif.sh@103 -- # iodepth=3 00:27:25.461 00:11:55 -- target/dif.sh@103 -- # runtime=5 00:27:25.461 00:11:55 -- target/dif.sh@105 -- # create_subsystems 0 00:27:25.461 00:11:55 -- target/dif.sh@28 -- # local sub 00:27:25.461 00:11:55 -- target/dif.sh@30 -- # for sub in "$@" 00:27:25.461 00:11:55 -- target/dif.sh@31 -- # create_subsystem 0 00:27:25.461 00:11:55 -- target/dif.sh@18 -- # local sub_id=0 00:27:25.461 00:11:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:25.461 00:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 bdev_null0 00:27:25.461 00:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.461 00:11:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:25.461 00:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 00:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.461 00:11:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:25.461 00:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 00:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.461 00:11:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:25.461 00:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.461 00:11:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.461 [2024-04-27 00:11:55.648282] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.461 00:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.461 00:11:55 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:25.462 00:11:55 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:25.462 00:11:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:25.462 00:11:55 -- nvmf/common.sh@521 -- # config=() 00:27:25.462 
00:11:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:25.462 00:11:55 -- nvmf/common.sh@521 -- # local subsystem config 00:27:25.462 00:11:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:25.462 00:11:55 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:25.462 00:11:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:25.462 { 00:27:25.462 "params": { 00:27:25.462 "name": "Nvme$subsystem", 00:27:25.462 "trtype": "$TEST_TRANSPORT", 00:27:25.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.462 "adrfam": "ipv4", 00:27:25.462 "trsvcid": "$NVMF_PORT", 00:27:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.462 "hdgst": ${hdgst:-false}, 00:27:25.462 "ddgst": ${ddgst:-false} 00:27:25.462 }, 00:27:25.462 "method": "bdev_nvme_attach_controller" 00:27:25.462 } 00:27:25.462 EOF 00:27:25.462 )") 00:27:25.462 00:11:55 -- target/dif.sh@82 -- # gen_fio_conf 00:27:25.462 00:11:55 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:25.462 00:11:55 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:25.462 00:11:55 -- target/dif.sh@54 -- # local file 00:27:25.462 00:11:55 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:25.462 00:11:55 -- target/dif.sh@56 -- # cat 00:27:25.462 00:11:55 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:25.462 00:11:55 -- common/autotest_common.sh@1327 -- # shift 00:27:25.462 00:11:55 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:25.462 00:11:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.462 00:11:55 -- nvmf/common.sh@543 -- # cat 00:27:25.462 00:11:55 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:25.462 00:11:55 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:25.462 00:11:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:25.462 00:11:55 -- target/dif.sh@72 -- # (( file <= files )) 00:27:25.462 00:11:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:25.462 00:11:55 -- nvmf/common.sh@545 -- # jq . 
00:27:25.462 00:11:55 -- nvmf/common.sh@546 -- # IFS=, 00:27:25.462 00:11:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:25.462 "params": { 00:27:25.462 "name": "Nvme0", 00:27:25.462 "trtype": "tcp", 00:27:25.462 "traddr": "10.0.0.2", 00:27:25.462 "adrfam": "ipv4", 00:27:25.462 "trsvcid": "4420", 00:27:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:25.462 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:25.462 "hdgst": false, 00:27:25.462 "ddgst": false 00:27:25.462 }, 00:27:25.462 "method": "bdev_nvme_attach_controller" 00:27:25.462 }' 00:27:25.746 00:11:55 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:25.746 00:11:55 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:25.746 00:11:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.746 00:11:55 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:25.746 00:11:55 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:25.746 00:11:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:25.746 00:11:55 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:25.746 00:11:55 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:25.746 00:11:55 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:25.746 00:11:55 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:26.010 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:26.010 ... 00:27:26.010 fio-3.35 00:27:26.010 Starting 3 threads 00:27:26.010 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.667 00:27:32.667 filename0: (groupid=0, jobs=1): err= 0: pid=575780: Sat Apr 27 00:12:01 2024 00:27:32.667 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(145MiB/5048msec) 00:27:32.667 slat (nsec): min=4486, max=50143, avg=8407.53, stdev=1294.37 00:27:32.667 clat (usec): min=5590, max=56227, avg=13019.76, stdev=9764.04 00:27:32.667 lat (usec): min=5598, max=56236, avg=13028.17, stdev=9764.05 00:27:32.667 clat percentiles (usec): 00:27:32.667 | 1.00th=[ 5997], 5.00th=[ 6652], 10.00th=[ 7570], 20.00th=[ 8586], 00:27:32.667 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10814], 60.00th=[11600], 00:27:32.667 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14746], 95.00th=[49021], 00:27:32.667 | 99.00th=[52691], 99.50th=[54264], 99.90th=[55837], 99.95th=[56361], 00:27:32.667 | 99.99th=[56361] 00:27:32.667 bw ( KiB/s): min=19968, max=35072, per=35.73%, avg=29599.60, stdev=4502.15, samples=10 00:27:32.667 iops : min= 156, max= 274, avg=231.20, stdev=35.17, samples=10 00:27:32.667 lat (msec) : 10=37.53%, 20=56.60%, 50=1.73%, 100=4.14% 00:27:32.667 cpu : usr=96.08%, sys=3.67%, ctx=17, majf=0, minf=98 00:27:32.667 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.667 issued rwts: total=1159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.667 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:32.667 filename0: (groupid=0, jobs=1): err= 0: pid=575781: Sat Apr 27 00:12:01 2024 00:27:32.667 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(124MiB/5024msec) 00:27:32.667 slat (nsec): min=2998, max=19431, avg=5946.79, stdev=669.23 00:27:32.667 clat (usec): 
min=4816, max=93484, avg=15182.60, stdev=13518.45 00:27:32.667 lat (usec): min=4822, max=93490, avg=15188.55, stdev=13518.44 00:27:32.667 clat percentiles (usec): 00:27:32.667 | 1.00th=[ 5407], 5.00th=[ 6980], 10.00th=[ 8094], 20.00th=[ 9110], 00:27:32.667 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:27:32.667 | 70.00th=[12518], 80.00th=[13698], 90.00th=[47973], 95.00th=[51119], 00:27:32.667 | 99.00th=[54789], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:27:32.667 | 99.99th=[93848] 00:27:32.667 bw ( KiB/s): min=21248, max=28928, per=30.56%, avg=25318.40, stdev=2529.81, samples=10 00:27:32.667 iops : min= 166, max= 226, avg=197.80, stdev=19.76, samples=10 00:27:32.667 lat (msec) : 10=32.66%, 20=57.06%, 50=3.23%, 100=7.06% 00:27:32.667 cpu : usr=96.66%, sys=3.11%, ctx=10, majf=0, minf=75 00:27:32.667 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.667 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.667 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:32.667 filename0: (groupid=0, jobs=1): err= 0: pid=575782: Sat Apr 27 00:12:01 2024 00:27:32.667 read: IOPS=223, BW=27.9MiB/s (29.2MB/s)(140MiB/5004msec) 00:27:32.667 slat (nsec): min=5318, max=29888, avg=5953.30, stdev=1144.25 00:27:32.667 clat (usec): min=5706, max=55571, avg=13442.53, stdev=10235.10 00:27:32.667 lat (usec): min=5711, max=55577, avg=13448.48, stdev=10235.10 00:27:32.667 clat percentiles (usec): 00:27:32.667 | 1.00th=[ 6259], 5.00th=[ 6915], 10.00th=[ 7767], 20.00th=[ 8717], 00:27:32.667 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10945], 60.00th=[11600], 00:27:32.667 | 70.00th=[12387], 80.00th=[13435], 90.00th=[15270], 95.00th=[49021], 00:27:32.667 | 99.00th=[52691], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:27:32.667 | 99.99th=[55313] 00:27:32.667 bw ( KiB/s): min=19200, max=34304, per=34.39%, avg=28492.80, stdev=4965.56, samples=10 00:27:32.667 iops : min= 150, max= 268, avg=222.60, stdev=38.79, samples=10 00:27:32.667 lat (msec) : 10=35.22%, 20=58.06%, 50=2.78%, 100=3.94% 00:27:32.667 cpu : usr=96.04%, sys=3.74%, ctx=15, majf=0, minf=127 00:27:32.667 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.667 issued rwts: total=1116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.667 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:32.667 00:27:32.667 Run status group 0 (all jobs): 00:27:32.667 READ: bw=80.9MiB/s (84.8MB/s), 24.7MiB/s-28.7MiB/s (25.9MB/s-30.1MB/s), io=408MiB (428MB), run=5004-5048msec 00:27:32.667 00:12:01 -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:32.667 00:12:01 -- target/dif.sh@43 -- # local sub 00:27:32.667 00:12:01 -- target/dif.sh@45 -- # for sub in "$@" 00:27:32.667 00:12:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:32.667 00:12:01 -- target/dif.sh@36 -- # local sub_id=0 00:27:32.667 00:12:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.667 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.667 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.667 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:27:32.667 00:12:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:32.667 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.667 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.667 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.667 00:12:01 -- target/dif.sh@109 -- # NULL_DIF=2 00:27:32.667 00:12:01 -- target/dif.sh@109 -- # bs=4k 00:27:32.667 00:12:01 -- target/dif.sh@109 -- # numjobs=8 00:27:32.667 00:12:01 -- target/dif.sh@109 -- # iodepth=16 00:27:32.667 00:12:01 -- target/dif.sh@109 -- # runtime= 00:27:32.667 00:12:01 -- target/dif.sh@109 -- # files=2 00:27:32.667 00:12:01 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:32.667 00:12:01 -- target/dif.sh@28 -- # local sub 00:27:32.667 00:12:01 -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.667 00:12:01 -- target/dif.sh@31 -- # create_subsystem 0 00:27:32.667 00:12:01 -- target/dif.sh@18 -- # local sub_id=0 00:27:32.667 00:12:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:32.667 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.667 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.667 bdev_null0 00:27:32.667 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.667 00:12:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:32.667 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:32.668 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:32.668 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 [2024-04-27 00:12:01.819119] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.668 00:12:01 -- target/dif.sh@31 -- # create_subsystem 1 00:27:32.668 00:12:01 -- target/dif.sh@18 -- # local sub_id=1 00:27:32.668 00:12:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:32.668 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 bdev_null1 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:32.668 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:32.668 00:12:01 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.668 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.668 00:12:01 -- target/dif.sh@31 -- # create_subsystem 2 00:27:32.668 00:12:01 -- target/dif.sh@18 -- # local sub_id=2 00:27:32.668 00:12:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:32.668 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 bdev_null2 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:32.668 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:32.668 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:32.668 00:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:32.668 00:12:01 -- common/autotest_common.sh@10 -- # set +x 00:27:32.668 00:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:32.668 00:12:01 -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:32.668 00:12:01 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:32.668 00:12:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:32.668 00:12:01 -- nvmf/common.sh@521 -- # config=() 00:27:32.668 00:12:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.668 00:12:01 -- nvmf/common.sh@521 -- # local subsystem config 00:27:32.668 00:12:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:32.668 00:12:01 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.668 00:12:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:32.668 { 00:27:32.668 "params": { 00:27:32.668 "name": "Nvme$subsystem", 00:27:32.668 "trtype": "$TEST_TRANSPORT", 00:27:32.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.668 "adrfam": "ipv4", 00:27:32.668 "trsvcid": "$NVMF_PORT", 00:27:32.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.668 "hdgst": ${hdgst:-false}, 00:27:32.668 "ddgst": ${ddgst:-false} 00:27:32.668 }, 00:27:32.668 "method": "bdev_nvme_attach_controller" 00:27:32.668 } 00:27:32.668 EOF 00:27:32.668 )") 00:27:32.668 00:12:01 -- target/dif.sh@82 -- # gen_fio_conf 
00:27:32.668 00:12:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:32.668 00:12:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.668 00:12:01 -- target/dif.sh@54 -- # local file 00:27:32.668 00:12:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:32.668 00:12:01 -- target/dif.sh@56 -- # cat 00:27:32.668 00:12:01 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:32.668 00:12:01 -- common/autotest_common.sh@1327 -- # shift 00:27:32.668 00:12:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:32.668 00:12:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.668 00:12:01 -- nvmf/common.sh@543 -- # cat 00:27:32.668 00:12:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:32.668 00:12:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:32.668 00:12:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:32.668 00:12:01 -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.668 00:12:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:32.668 00:12:01 -- target/dif.sh@73 -- # cat 00:27:32.668 00:12:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:32.668 00:12:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:32.668 { 00:27:32.668 "params": { 00:27:32.668 "name": "Nvme$subsystem", 00:27:32.668 "trtype": "$TEST_TRANSPORT", 00:27:32.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.668 "adrfam": "ipv4", 00:27:32.668 "trsvcid": "$NVMF_PORT", 00:27:32.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.668 "hdgst": ${hdgst:-false}, 00:27:32.668 "ddgst": ${ddgst:-false} 00:27:32.668 }, 00:27:32.668 "method": "bdev_nvme_attach_controller" 00:27:32.668 } 00:27:32.668 EOF 00:27:32.668 )") 00:27:32.668 00:12:01 -- target/dif.sh@72 -- # (( file++ )) 00:27:32.668 00:12:01 -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.668 00:12:01 -- target/dif.sh@73 -- # cat 00:27:32.668 00:12:01 -- nvmf/common.sh@543 -- # cat 00:27:32.668 00:12:01 -- target/dif.sh@72 -- # (( file++ )) 00:27:32.668 00:12:01 -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.668 00:12:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:32.668 00:12:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:32.668 { 00:27:32.668 "params": { 00:27:32.668 "name": "Nvme$subsystem", 00:27:32.668 "trtype": "$TEST_TRANSPORT", 00:27:32.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.668 "adrfam": "ipv4", 00:27:32.668 "trsvcid": "$NVMF_PORT", 00:27:32.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.668 "hdgst": ${hdgst:-false}, 00:27:32.668 "ddgst": ${ddgst:-false} 00:27:32.668 }, 00:27:32.668 "method": "bdev_nvme_attach_controller" 00:27:32.668 } 00:27:32.668 EOF 00:27:32.668 )") 00:27:32.668 00:12:01 -- nvmf/common.sh@543 -- # cat 00:27:32.668 00:12:01 -- nvmf/common.sh@545 -- # jq . 
00:27:32.669 00:12:01 -- nvmf/common.sh@546 -- # IFS=, 00:27:32.669 00:12:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:32.669 "params": { 00:27:32.669 "name": "Nvme0", 00:27:32.669 "trtype": "tcp", 00:27:32.669 "traddr": "10.0.0.2", 00:27:32.669 "adrfam": "ipv4", 00:27:32.669 "trsvcid": "4420", 00:27:32.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.669 "hdgst": false, 00:27:32.669 "ddgst": false 00:27:32.669 }, 00:27:32.669 "method": "bdev_nvme_attach_controller" 00:27:32.669 },{ 00:27:32.669 "params": { 00:27:32.669 "name": "Nvme1", 00:27:32.669 "trtype": "tcp", 00:27:32.669 "traddr": "10.0.0.2", 00:27:32.669 "adrfam": "ipv4", 00:27:32.669 "trsvcid": "4420", 00:27:32.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:32.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:32.669 "hdgst": false, 00:27:32.669 "ddgst": false 00:27:32.669 }, 00:27:32.669 "method": "bdev_nvme_attach_controller" 00:27:32.669 },{ 00:27:32.669 "params": { 00:27:32.669 "name": "Nvme2", 00:27:32.669 "trtype": "tcp", 00:27:32.669 "traddr": "10.0.0.2", 00:27:32.669 "adrfam": "ipv4", 00:27:32.669 "trsvcid": "4420", 00:27:32.669 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:32.669 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:32.669 "hdgst": false, 00:27:32.669 "ddgst": false 00:27:32.669 }, 00:27:32.669 "method": "bdev_nvme_attach_controller" 00:27:32.669 }' 00:27:32.669 00:12:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:32.669 00:12:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:32.669 00:12:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.669 00:12:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:32.669 00:12:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:32.669 00:12:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:32.669 00:12:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:32.669 00:12:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:32.669 00:12:02 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:32.669 00:12:02 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.669 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:32.669 ... 00:27:32.669 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:32.669 ... 00:27:32.669 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:32.669 ... 
00:27:32.669 fio-3.35 00:27:32.669 Starting 24 threads 00:27:32.669 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.909 00:27:44.909 filename0: (groupid=0, jobs=1): err= 0: pid=577392: Sat Apr 27 00:12:13 2024 00:27:44.909 read: IOPS=560, BW=2241KiB/s (2295kB/s)(21.9MiB/10007msec) 00:27:44.909 slat (nsec): min=5507, max=65286, avg=8085.90, stdev=4127.51 00:27:44.909 clat (usec): min=1107, max=34358, avg=28490.07, stdev=6169.03 00:27:44.909 lat (usec): min=1131, max=34394, avg=28498.16, stdev=6168.29 00:27:44.909 clat percentiles (usec): 00:27:44.909 | 1.00th=[ 2769], 5.00th=[17957], 10.00th=[20579], 20.00th=[22938], 00:27:44.909 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.909 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:27:44.909 | 99.00th=[32637], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:27:44.909 | 99.99th=[34341] 00:27:44.909 bw ( KiB/s): min= 1920, max= 3120, per=4.59%, avg=2218.11, stdev=277.99, samples=19 00:27:44.909 iops : min= 480, max= 780, avg=554.42, stdev=69.44, samples=19 00:27:44.909 lat (msec) : 2=0.37%, 4=2.00%, 10=0.30%, 20=4.85%, 50=92.47% 00:27:44.909 cpu : usr=98.87%, sys=0.73%, ctx=119, majf=0, minf=55 00:27:44.909 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:44.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 issued rwts: total=5606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.909 filename0: (groupid=0, jobs=1): err= 0: pid=577393: Sat Apr 27 00:12:13 2024 00:27:44.909 read: IOPS=512, BW=2050KiB/s (2100kB/s)(20.1MiB/10020msec) 00:27:44.909 slat (usec): min=5, max=110, avg=14.38, stdev=12.85 00:27:44.909 clat (usec): min=2726, max=34316, avg=31100.41, stdev=3771.86 00:27:44.909 lat (usec): min=2737, max=34351, avg=31114.79, stdev=3771.96 00:27:44.909 clat percentiles (usec): 00:27:44.909 | 1.00th=[ 4293], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:27:44.909 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.909 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32900], 00:27:44.909 | 99.00th=[33162], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:27:44.909 | 99.99th=[34341] 00:27:44.909 bw ( KiB/s): min= 1916, max= 2560, per=4.23%, avg=2047.30, stdev=137.70, samples=20 00:27:44.909 iops : min= 479, max= 640, avg=511.75, stdev=34.39, samples=20 00:27:44.909 lat (msec) : 4=0.93%, 10=0.62%, 50=98.44% 00:27:44.909 cpu : usr=99.22%, sys=0.45%, ctx=62, majf=0, minf=51 00:27:44.909 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 issued rwts: total=5136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.909 filename0: (groupid=0, jobs=1): err= 0: pid=577394: Sat Apr 27 00:12:13 2024 00:27:44.909 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10008msec) 00:27:44.909 slat (usec): min=5, max=185, avg=22.17, stdev=16.08 00:27:44.909 clat (usec): min=19598, max=62517, avg=31779.79, stdev=1889.32 00:27:44.909 lat (usec): min=19605, max=62536, avg=31801.97, stdev=1887.14 00:27:44.909 clat percentiles (usec): 00:27:44.909 | 1.00th=[30540], 
5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.909 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.909 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.909 | 99.00th=[33424], 99.50th=[33817], 99.90th=[62653], 99.95th=[62653], 00:27:44.909 | 99.99th=[62653] 00:27:44.909 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=2000.32, stdev=76.12, samples=19 00:27:44.909 iops : min= 448, max= 512, avg=500.00, stdev=18.99, samples=19 00:27:44.909 lat (msec) : 20=0.08%, 50=99.60%, 100=0.32% 00:27:44.909 cpu : usr=99.06%, sys=0.59%, ctx=45, majf=0, minf=37 00:27:44.909 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.909 filename0: (groupid=0, jobs=1): err= 0: pid=577395: Sat Apr 27 00:12:13 2024 00:27:44.909 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10003msec) 00:27:44.909 slat (usec): min=5, max=143, avg=23.47, stdev=20.88 00:27:44.909 clat (usec): min=22505, max=35362, avg=31670.27, stdev=786.98 00:27:44.909 lat (usec): min=22537, max=35381, avg=31693.74, stdev=783.69 00:27:44.909 clat percentiles (usec): 00:27:44.909 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.909 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.909 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32900], 00:27:44.909 | 99.00th=[33424], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:27:44.909 | 99.99th=[35390] 00:27:44.909 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=2007.05, stdev=60.78, samples=19 00:27:44.909 iops : min= 480, max= 512, avg=501.68, stdev=15.15, samples=19 00:27:44.909 lat (msec) : 50=100.00% 00:27:44.909 cpu : usr=99.00%, sys=0.60%, ctx=29, majf=0, minf=45 00:27:44.909 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.909 filename0: (groupid=0, jobs=1): err= 0: pid=577396: Sat Apr 27 00:12:13 2024 00:27:44.909 read: IOPS=506, BW=2028KiB/s (2076kB/s)(19.8MiB/10005msec) 00:27:44.909 slat (usec): min=5, max=104, avg=16.34, stdev=12.88 00:27:44.909 clat (usec): min=7435, max=49280, avg=31472.89, stdev=4155.51 00:27:44.909 lat (usec): min=7441, max=49286, avg=31489.23, stdev=4155.65 00:27:44.909 clat percentiles (usec): 00:27:44.909 | 1.00th=[16909], 5.00th=[23725], 10.00th=[28181], 20.00th=[31327], 00:27:44.909 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.909 | 70.00th=[31851], 80.00th=[32375], 90.00th=[33162], 95.00th=[36963], 00:27:44.909 | 99.00th=[45876], 99.50th=[47449], 99.90th=[49021], 99.95th=[49021], 00:27:44.909 | 99.99th=[49021] 00:27:44.909 bw ( KiB/s): min= 1920, max= 2192, per=4.18%, avg=2021.05, stdev=66.48, samples=19 00:27:44.909 iops : min= 480, max= 548, avg=505.26, stdev=16.62, samples=19 00:27:44.909 lat (msec) : 10=0.20%, 20=1.89%, 50=97.91% 00:27:44.909 cpu : usr=99.02%, sys=0.69%, ctx=11, majf=0, minf=46 
00:27:44.909 IO depths : 1=0.6%, 2=1.9%, 4=7.0%, 8=74.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:27:44.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 complete : 0=0.0%, 4=90.5%, 8=7.4%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.909 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.909 filename0: (groupid=0, jobs=1): err= 0: pid=577397: Sat Apr 27 00:12:13 2024 00:27:44.909 read: IOPS=504, BW=2019KiB/s (2067kB/s)(19.7MiB/10015msec) 00:27:44.910 slat (usec): min=5, max=134, avg=25.64, stdev=21.16 00:27:44.910 clat (usec): min=17092, max=60312, avg=31474.09, stdev=2578.98 00:27:44.910 lat (usec): min=17100, max=60333, avg=31499.73, stdev=2579.08 00:27:44.910 clat percentiles (usec): 00:27:44.910 | 1.00th=[20841], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:27:44.910 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.910 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.910 | 99.00th=[33424], 99.50th=[42730], 99.90th=[60031], 99.95th=[60031], 00:27:44.910 | 99.99th=[60556] 00:27:44.910 bw ( KiB/s): min= 1792, max= 2296, per=4.17%, avg=2019.89, stdev=97.28, samples=19 00:27:44.910 iops : min= 448, max= 574, avg=504.89, stdev=24.30, samples=19 00:27:44.910 lat (msec) : 20=0.85%, 50=98.83%, 100=0.32% 00:27:44.910 cpu : usr=99.01%, sys=0.57%, ctx=26, majf=0, minf=58 00:27:44.910 IO depths : 1=5.9%, 2=12.0%, 4=24.3%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:44.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 issued rwts: total=5055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.910 filename0: (groupid=0, jobs=1): err= 0: pid=577398: Sat Apr 27 00:12:13 2024 00:27:44.910 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10002msec) 00:27:44.910 slat (usec): min=5, max=111, avg=28.15, stdev=18.61 00:27:44.910 clat (usec): min=19725, max=55871, avg=31676.25, stdev=1517.10 00:27:44.910 lat (usec): min=19731, max=55912, avg=31704.40, stdev=1515.75 00:27:44.910 clat percentiles (usec): 00:27:44.910 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.910 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.910 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[32900], 00:27:44.910 | 99.00th=[33817], 99.50th=[33817], 99.90th=[55837], 99.95th=[55837], 00:27:44.910 | 99.99th=[55837] 00:27:44.910 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=2000.84, stdev=76.45, samples=19 00:27:44.910 iops : min= 448, max= 512, avg=500.21, stdev=19.11, samples=19 00:27:44.910 lat (msec) : 20=0.04%, 50=99.64%, 100=0.32% 00:27:44.910 cpu : usr=98.97%, sys=0.67%, ctx=36, majf=0, minf=47 00:27:44.910 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.910 filename0: (groupid=0, jobs=1): err= 0: pid=577399: Sat Apr 27 00:12:13 2024 00:27:44.910 read: IOPS=503, BW=2014KiB/s (2063kB/s)(19.7MiB/10005msec) 00:27:44.910 slat (nsec): min=5432, 
max=81934, avg=20481.72, stdev=13026.37 00:27:44.910 clat (usec): min=5635, max=47567, avg=31596.39, stdev=2061.76 00:27:44.910 lat (usec): min=5640, max=47587, avg=31616.87, stdev=2062.02 00:27:44.910 clat percentiles (usec): 00:27:44.910 | 1.00th=[24773], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.910 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.910 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.910 | 99.00th=[33817], 99.50th=[36963], 99.90th=[45351], 99.95th=[45876], 00:27:44.910 | 99.99th=[47449] 00:27:44.910 bw ( KiB/s): min= 1920, max= 2052, per=4.14%, avg=2001.37, stdev=63.19, samples=19 00:27:44.910 iops : min= 480, max= 513, avg=500.26, stdev=15.90, samples=19 00:27:44.910 lat (msec) : 10=0.28%, 20=0.52%, 50=99.21% 00:27:44.910 cpu : usr=98.27%, sys=0.93%, ctx=52, majf=0, minf=44 00:27:44.910 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:44.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 issued rwts: total=5038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.910 filename1: (groupid=0, jobs=1): err= 0: pid=577400: Sat Apr 27 00:12:13 2024 00:27:44.910 read: IOPS=503, BW=2015KiB/s (2063kB/s)(19.7MiB/10005msec) 00:27:44.910 slat (nsec): min=5520, max=84774, avg=21112.10, stdev=12796.84 00:27:44.910 clat (usec): min=5651, max=47732, avg=31569.68, stdev=2150.84 00:27:44.910 lat (usec): min=5657, max=47760, avg=31590.79, stdev=2151.13 00:27:44.910 clat percentiles (usec): 00:27:44.910 | 1.00th=[24773], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.910 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.910 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.910 | 99.00th=[33817], 99.50th=[36439], 99.90th=[47449], 99.95th=[47449], 00:27:44.910 | 99.99th=[47973] 00:27:44.910 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=2007.47, stdev=60.72, samples=19 00:27:44.910 iops : min= 480, max= 512, avg=501.79, stdev=15.22, samples=19 00:27:44.910 lat (msec) : 10=0.32%, 20=0.48%, 50=99.21% 00:27:44.910 cpu : usr=98.85%, sys=0.74%, ctx=78, majf=0, minf=46 00:27:44.910 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:44.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.910 filename1: (groupid=0, jobs=1): err= 0: pid=577401: Sat Apr 27 00:12:13 2024 00:27:44.910 read: IOPS=481, BW=1927KiB/s (1973kB/s)(18.8MiB/10005msec) 00:27:44.910 slat (usec): min=5, max=112, avg=20.12, stdev=15.64 00:27:44.910 clat (usec): min=9458, max=50995, avg=33066.54, stdev=4938.38 00:27:44.910 lat (usec): min=9464, max=51004, avg=33086.67, stdev=4937.58 00:27:44.910 clat percentiles (usec): 00:27:44.910 | 1.00th=[22414], 5.00th=[26346], 10.00th=[30802], 20.00th=[31327], 00:27:44.910 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.910 | 70.00th=[32375], 80.00th=[33424], 90.00th=[41157], 95.00th=[43254], 00:27:44.910 | 99.00th=[48497], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:27:44.910 | 99.99th=[51119] 
00:27:44.910 bw ( KiB/s): min= 1536, max= 2059, per=3.96%, avg=1914.63, stdev=152.31, samples=19 00:27:44.910 iops : min= 384, max= 514, avg=478.58, stdev=38.02, samples=19 00:27:44.910 lat (msec) : 10=0.12%, 20=0.54%, 50=99.29%, 100=0.04% 00:27:44.910 cpu : usr=99.16%, sys=0.53%, ctx=16, majf=0, minf=71 00:27:44.910 IO depths : 1=1.9%, 2=3.9%, 4=13.0%, 8=68.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:27:44.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 complete : 0=0.0%, 4=91.8%, 8=4.7%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 issued rwts: total=4820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.910 filename1: (groupid=0, jobs=1): err= 0: pid=577402: Sat Apr 27 00:12:13 2024 00:27:44.910 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10003msec) 00:27:44.910 slat (usec): min=5, max=103, avg=23.07, stdev=17.37 00:27:44.910 clat (usec): min=20125, max=35956, avg=31663.67, stdev=887.66 00:27:44.910 lat (usec): min=20131, max=35975, avg=31686.74, stdev=885.88 00:27:44.910 clat percentiles (usec): 00:27:44.910 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.910 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.910 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32900], 00:27:44.910 | 99.00th=[33424], 99.50th=[33424], 99.90th=[35914], 99.95th=[35914], 00:27:44.910 | 99.99th=[35914] 00:27:44.910 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=2007.11, stdev=61.28, samples=19 00:27:44.910 iops : min= 479, max= 512, avg=501.74, stdev=15.30, samples=19 00:27:44.910 lat (msec) : 50=100.00% 00:27:44.910 cpu : usr=98.23%, sys=1.05%, ctx=134, majf=0, minf=37 00:27:44.910 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.910 filename1: (groupid=0, jobs=1): err= 0: pid=577403: Sat Apr 27 00:12:13 2024 00:27:44.910 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10005msec) 00:27:44.910 slat (usec): min=5, max=112, avg=18.32, stdev=14.48 00:27:44.910 clat (usec): min=4459, max=53732, avg=32066.17, stdev=3174.40 00:27:44.910 lat (usec): min=4466, max=53742, avg=32084.49, stdev=3174.43 00:27:44.910 clat percentiles (usec): 00:27:44.910 | 1.00th=[24773], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:27:44.910 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.910 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32900], 95.00th=[33817], 00:27:44.910 | 99.00th=[45876], 99.50th=[50594], 99.90th=[53740], 99.95th=[53740], 00:27:44.910 | 99.99th=[53740] 00:27:44.910 bw ( KiB/s): min= 1840, max= 2048, per=4.09%, avg=1979.68, stdev=56.66, samples=19 00:27:44.910 iops : min= 460, max= 512, avg=494.84, stdev=14.16, samples=19 00:27:44.910 lat (msec) : 10=0.32%, 20=0.40%, 50=98.71%, 100=0.56% 00:27:44.910 cpu : usr=99.22%, sys=0.48%, ctx=14, majf=0, minf=57 00:27:44.910 IO depths : 1=1.3%, 2=3.1%, 4=8.4%, 8=72.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:44.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 complete : 0=0.0%, 4=91.0%, 8=7.1%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.910 issued rwts: total=4978,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:27:44.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.910 filename1: (groupid=0, jobs=1): err= 0: pid=577404: Sat Apr 27 00:12:13 2024 00:27:44.910 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10003msec) 00:27:44.910 slat (usec): min=5, max=111, avg=14.41, stdev=13.00 00:27:44.910 clat (usec): min=20319, max=36034, avg=31743.63, stdev=878.57 00:27:44.910 lat (usec): min=20331, max=36058, avg=31758.05, stdev=876.97 00:27:44.910 clat percentiles (usec): 00:27:44.910 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31327], 20.00th=[31327], 00:27:44.910 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.910 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.910 | 99.00th=[33424], 99.50th=[33424], 99.90th=[35914], 99.95th=[35914], 00:27:44.910 | 99.99th=[35914] 00:27:44.910 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=2007.11, stdev=61.28, samples=19 00:27:44.910 iops : min= 479, max= 512, avg=501.74, stdev=15.30, samples=19 00:27:44.910 lat (msec) : 50=100.00% 00:27:44.910 cpu : usr=98.15%, sys=1.00%, ctx=99, majf=0, minf=45 00:27:44.910 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.911 filename1: (groupid=0, jobs=1): err= 0: pid=577405: Sat Apr 27 00:12:13 2024 00:27:44.911 read: IOPS=503, BW=2014KiB/s (2062kB/s)(19.7MiB/10003msec) 00:27:44.911 slat (nsec): min=5515, max=87243, avg=12092.64, stdev=11310.25 00:27:44.911 clat (usec): min=13248, max=50595, avg=31684.92, stdev=3368.65 00:27:44.911 lat (usec): min=13279, max=50601, avg=31697.01, stdev=3367.94 00:27:44.911 clat percentiles (usec): 00:27:44.911 | 1.00th=[15401], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:27:44.911 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.911 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32900], 95.00th=[33162], 00:27:44.911 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:27:44.911 | 99.99th=[50594] 00:27:44.911 bw ( KiB/s): min= 1920, max= 2104, per=4.16%, avg=2012.58, stdev=60.17, samples=19 00:27:44.911 iops : min= 480, max= 526, avg=503.11, stdev=15.02, samples=19 00:27:44.911 lat (msec) : 20=1.93%, 50=97.92%, 100=0.16% 00:27:44.911 cpu : usr=99.15%, sys=0.54%, ctx=14, majf=0, minf=53 00:27:44.911 IO depths : 1=5.2%, 2=11.1%, 4=23.9%, 8=52.4%, 16=7.4%, 32=0.0%, >=64=0.0% 00:27:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 issued rwts: total=5036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.911 filename1: (groupid=0, jobs=1): err= 0: pid=577406: Sat Apr 27 00:12:13 2024 00:27:44.911 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.6MiB/10015msec) 00:27:44.911 slat (usec): min=5, max=134, avg=28.25, stdev=19.66 00:27:44.911 clat (usec): min=17011, max=60090, avg=31633.86, stdev=1953.34 00:27:44.911 lat (usec): min=17022, max=60112, avg=31662.11, stdev=1952.09 00:27:44.911 clat percentiles (usec): 00:27:44.911 | 1.00th=[30016], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.911 | 
30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.911 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[32900], 00:27:44.911 | 99.00th=[33424], 99.50th=[33424], 99.90th=[60031], 99.95th=[60031], 00:27:44.911 | 99.99th=[60031] 00:27:44.911 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=2006.79, stdev=74.12, samples=19 00:27:44.911 iops : min= 448, max= 512, avg=501.58, stdev=18.47, samples=19 00:27:44.911 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:27:44.911 cpu : usr=99.18%, sys=0.45%, ctx=41, majf=0, minf=42 00:27:44.911 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.911 filename1: (groupid=0, jobs=1): err= 0: pid=577407: Sat Apr 27 00:12:13 2024 00:27:44.911 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10005msec) 00:27:44.911 slat (usec): min=5, max=129, avg=26.63, stdev=18.52 00:27:44.911 clat (usec): min=30073, max=59371, avg=31709.29, stdev=1666.70 00:27:44.911 lat (usec): min=30082, max=59389, avg=31735.92, stdev=1664.99 00:27:44.911 clat percentiles (usec): 00:27:44.911 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.911 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.911 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.911 | 99.00th=[33424], 99.50th=[33817], 99.90th=[59507], 99.95th=[59507], 00:27:44.911 | 99.99th=[59507] 00:27:44.911 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=2000.21, stdev=75.50, samples=19 00:27:44.911 iops : min= 448, max= 512, avg=499.89, stdev=18.92, samples=19 00:27:44.911 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.911 cpu : usr=98.99%, sys=0.60%, ctx=97, majf=0, minf=36 00:27:44.911 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.911 filename2: (groupid=0, jobs=1): err= 0: pid=577408: Sat Apr 27 00:12:13 2024 00:27:44.911 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10005msec) 00:27:44.911 slat (nsec): min=5535, max=85577, avg=19531.60, stdev=14327.99 00:27:44.911 clat (usec): min=14416, max=57297, avg=31699.07, stdev=1732.99 00:27:44.911 lat (usec): min=14425, max=57313, avg=31718.60, stdev=1732.52 00:27:44.911 clat percentiles (usec): 00:27:44.911 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.911 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.911 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.911 | 99.00th=[33817], 99.50th=[39584], 99.90th=[48497], 99.95th=[48497], 00:27:44.911 | 99.99th=[57410] 00:27:44.911 bw ( KiB/s): min= 1904, max= 2064, per=4.13%, avg=2000.74, stdev=65.03, samples=19 00:27:44.911 iops : min= 476, max= 516, avg=500.11, stdev=16.28, samples=19 00:27:44.911 lat (msec) : 20=0.56%, 50=99.40%, 100=0.04% 00:27:44.911 cpu : usr=98.28%, sys=0.94%, ctx=173, majf=0, minf=43 00:27:44.911 IO depths : 1=5.8%, 
2=12.1%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:27:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.911 filename2: (groupid=0, jobs=1): err= 0: pid=577409: Sat Apr 27 00:12:13 2024 00:27:44.911 read: IOPS=512, BW=2049KiB/s (2098kB/s)(20.0MiB/10014msec) 00:27:44.911 slat (usec): min=5, max=113, avg=27.05, stdev=19.82 00:27:44.911 clat (usec): min=17447, max=59374, avg=30972.79, stdev=3652.62 00:27:44.911 lat (usec): min=17455, max=59391, avg=30999.84, stdev=3655.63 00:27:44.911 clat percentiles (usec): 00:27:44.911 | 1.00th=[20579], 5.00th=[22676], 10.00th=[26608], 20.00th=[31065], 00:27:44.911 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:27:44.911 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32637], 95.00th=[33424], 00:27:44.911 | 99.00th=[41681], 99.50th=[47449], 99.90th=[59507], 99.95th=[59507], 00:27:44.911 | 99.99th=[59507] 00:27:44.911 bw ( KiB/s): min= 1795, max= 2288, per=4.25%, avg=2054.11, stdev=114.83, samples=19 00:27:44.911 iops : min= 448, max= 572, avg=513.37, stdev=28.78, samples=19 00:27:44.911 lat (msec) : 20=0.88%, 50=98.81%, 100=0.31% 00:27:44.911 cpu : usr=98.97%, sys=0.64%, ctx=63, majf=0, minf=39 00:27:44.911 IO depths : 1=4.4%, 2=8.9%, 4=18.9%, 8=59.1%, 16=8.7%, 32=0.0%, >=64=0.0% 00:27:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 complete : 0=0.0%, 4=92.5%, 8=2.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 issued rwts: total=5130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.911 filename2: (groupid=0, jobs=1): err= 0: pid=577410: Sat Apr 27 00:12:13 2024 00:27:44.911 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.6MiB/10012msec) 00:27:44.911 slat (usec): min=5, max=114, avg=26.31, stdev=16.29 00:27:44.911 clat (usec): min=29006, max=35834, avg=31651.55, stdev=642.95 00:27:44.911 lat (usec): min=29027, max=35854, avg=31677.86, stdev=639.44 00:27:44.911 clat percentiles (usec): 00:27:44.911 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.911 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.911 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.911 | 99.00th=[33424], 99.50th=[33817], 99.90th=[35914], 99.95th=[35914], 00:27:44.911 | 99.99th=[35914] 00:27:44.911 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=2007.05, stdev=60.78, samples=19 00:27:44.911 iops : min= 480, max= 512, avg=501.68, stdev=15.15, samples=19 00:27:44.911 lat (msec) : 50=100.00% 00:27:44.911 cpu : usr=98.41%, sys=0.79%, ctx=120, majf=0, minf=46 00:27:44.911 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.911 filename2: (groupid=0, jobs=1): err= 0: pid=577411: Sat Apr 27 00:12:13 2024 00:27:44.911 read: IOPS=502, BW=2009KiB/s (2058kB/s)(19.6MiB/10001msec) 00:27:44.911 slat (nsec): min=5532, max=68736, avg=18194.63, stdev=10997.14 00:27:44.911 
clat (usec): min=13302, max=48925, avg=31695.66, stdev=1922.14 00:27:44.911 lat (usec): min=13342, max=48987, avg=31713.85, stdev=1922.11 00:27:44.911 clat percentiles (usec): 00:27:44.911 | 1.00th=[24773], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.911 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.911 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.911 | 99.00th=[34341], 99.50th=[38011], 99.90th=[47449], 99.95th=[47973], 00:27:44.911 | 99.99th=[49021] 00:27:44.911 bw ( KiB/s): min= 1920, max= 2064, per=4.14%, avg=2001.21, stdev=63.62, samples=19 00:27:44.911 iops : min= 480, max= 516, avg=500.26, stdev=15.96, samples=19 00:27:44.911 lat (msec) : 20=0.72%, 50=99.28% 00:27:44.911 cpu : usr=99.00%, sys=0.68%, ctx=24, majf=0, minf=42 00:27:44.911 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:27:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.911 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.911 filename2: (groupid=0, jobs=1): err= 0: pid=577412: Sat Apr 27 00:12:13 2024 00:27:44.911 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.6MiB/10005msec) 00:27:44.911 slat (nsec): min=5417, max=70397, avg=17513.45, stdev=11415.49 00:27:44.911 clat (usec): min=6796, max=51444, avg=31741.90, stdev=3441.75 00:27:44.911 lat (usec): min=6801, max=51452, avg=31759.42, stdev=3441.99 00:27:44.911 clat percentiles (usec): 00:27:44.911 | 1.00th=[21627], 5.00th=[26608], 10.00th=[30540], 20.00th=[31327], 00:27:44.911 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.911 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32900], 95.00th=[36439], 00:27:44.911 | 99.00th=[45351], 99.50th=[48497], 99.90th=[50594], 99.95th=[51643], 00:27:44.911 | 99.99th=[51643] 00:27:44.912 bw ( KiB/s): min= 1888, max= 2112, per=4.13%, avg=1998.32, stdev=66.50, samples=19 00:27:44.912 iops : min= 472, max= 528, avg=499.58, stdev=16.62, samples=19 00:27:44.912 lat (msec) : 10=0.20%, 20=0.58%, 50=98.90%, 100=0.32% 00:27:44.912 cpu : usr=98.31%, sys=0.98%, ctx=51, majf=0, minf=41 00:27:44.912 IO depths : 1=3.9%, 2=8.2%, 4=18.6%, 8=59.7%, 16=9.6%, 32=0.0%, >=64=0.0% 00:27:44.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.912 complete : 0=0.0%, 4=92.5%, 8=2.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.912 issued rwts: total=5020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.912 filename2: (groupid=0, jobs=1): err= 0: pid=577413: Sat Apr 27 00:12:13 2024 00:27:44.912 read: IOPS=501, BW=2006KiB/s (2054kB/s)(19.6MiB/10018msec) 00:27:44.912 slat (nsec): min=5556, max=89388, avg=14511.82, stdev=10477.54 00:27:44.912 clat (usec): min=20031, max=42681, avg=31754.29, stdev=1087.75 00:27:44.912 lat (usec): min=20040, max=42690, avg=31768.80, stdev=1088.27 00:27:44.912 clat percentiles (usec): 00:27:44.912 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:27:44.912 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.912 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.912 | 99.00th=[33817], 99.50th=[34341], 99.90th=[42206], 99.95th=[42206], 00:27:44.912 | 99.99th=[42730] 00:27:44.912 bw ( KiB/s): min= 1916, max= 2048, 
per=4.15%, avg=2006.75, stdev=59.67, samples=20 00:27:44.912 iops : min= 479, max= 512, avg=501.65, stdev=14.89, samples=20 00:27:44.912 lat (msec) : 50=100.00% 00:27:44.912 cpu : usr=98.42%, sys=0.91%, ctx=59, majf=0, minf=46 00:27:44.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.912 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.912 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.912 filename2: (groupid=0, jobs=1): err= 0: pid=577414: Sat Apr 27 00:12:13 2024 00:27:44.912 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10007msec) 00:27:44.912 slat (nsec): min=5603, max=86977, avg=23654.64, stdev=13283.12 00:27:44.912 clat (usec): min=29257, max=63664, avg=31752.11, stdev=1758.01 00:27:44.912 lat (usec): min=29263, max=63686, avg=31775.76, stdev=1757.04 00:27:44.912 clat percentiles (usec): 00:27:44.912 | 1.00th=[30540], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.912 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:27:44.912 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32375], 95.00th=[32900], 00:27:44.912 | 99.00th=[33424], 99.50th=[33817], 99.90th=[61080], 99.95th=[61080], 00:27:44.912 | 99.99th=[63701] 00:27:44.912 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=2000.47, stdev=75.67, samples=19 00:27:44.912 iops : min= 448, max= 512, avg=500.00, stdev=18.99, samples=19 00:27:44.912 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.912 cpu : usr=98.65%, sys=0.81%, ctx=47, majf=0, minf=50 00:27:44.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.912 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.912 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.912 filename2: (groupid=0, jobs=1): err= 0: pid=577415: Sat Apr 27 00:12:13 2024 00:27:44.912 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10003msec) 00:27:44.912 slat (nsec): min=5512, max=83178, avg=13230.78, stdev=10958.70 00:27:44.912 clat (usec): min=20357, max=41550, avg=31745.48, stdev=960.21 00:27:44.912 lat (usec): min=20369, max=41563, avg=31758.71, stdev=960.85 00:27:44.912 clat percentiles (usec): 00:27:44.912 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:27:44.912 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.912 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:27:44.912 | 99.00th=[33424], 99.50th=[33817], 99.90th=[36963], 99.95th=[41157], 00:27:44.912 | 99.99th=[41681] 00:27:44.912 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=2007.11, stdev=61.28, samples=19 00:27:44.912 iops : min= 479, max= 512, avg=501.74, stdev=15.30, samples=19 00:27:44.912 lat (msec) : 50=100.00% 00:27:44.912 cpu : usr=99.03%, sys=0.59%, ctx=150, majf=0, minf=40 00:27:44.912 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.912 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.912 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.912 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:27:44.912 00:27:44.912 Run status group 0 (all jobs): 00:27:44.912 READ: bw=47.2MiB/s (49.5MB/s), 1927KiB/s-2241KiB/s (1973kB/s-2295kB/s), io=473MiB (496MB), run=10001-10020msec 00:27:44.912 00:12:13 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:44.912 00:12:13 -- target/dif.sh@43 -- # local sub 00:27:44.912 00:12:13 -- target/dif.sh@45 -- # for sub in "$@" 00:27:44.912 00:12:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:44.912 00:12:13 -- target/dif.sh@36 -- # local sub_id=0 00:27:44.912 00:12:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@45 -- # for sub in "$@" 00:27:44.912 00:12:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:44.912 00:12:13 -- target/dif.sh@36 -- # local sub_id=1 00:27:44.912 00:12:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@45 -- # for sub in "$@" 00:27:44.912 00:12:13 -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:44.912 00:12:13 -- target/dif.sh@36 -- # local sub_id=2 00:27:44.912 00:12:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@115 -- # NULL_DIF=1 00:27:44.912 00:12:13 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:44.912 00:12:13 -- target/dif.sh@115 -- # numjobs=2 00:27:44.912 00:12:13 -- target/dif.sh@115 -- # iodepth=8 00:27:44.912 00:12:13 -- target/dif.sh@115 -- # runtime=5 00:27:44.912 00:12:13 -- target/dif.sh@115 -- # files=1 00:27:44.912 00:12:13 -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:44.912 00:12:13 -- target/dif.sh@28 -- # local sub 00:27:44.912 00:12:13 -- target/dif.sh@30 -- # for sub in "$@" 00:27:44.912 00:12:13 -- target/dif.sh@31 -- # create_subsystem 0 00:27:44.912 00:12:13 -- target/dif.sh@18 -- # local sub_id=0 00:27:44.912 00:12:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
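[editor's note] Between test cases the target is torn down and rebuilt entirely through rpc_cmd, as traced above. A hedged equivalent using scripts/rpc.py directly (rpc_cmd is assumed to wrap it) is sketched below; the bdev arguments, NQNs and listener address are taken verbatim from the log.

  RPC=./scripts/rpc.py
  # Tear down the three subsystems and their null bdevs from the previous run.
  for i in 0 1 2; do
      $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      $RPC bdev_null_delete "bdev_null$i"
  done
  # Re-create a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and
  # DIF type 1, then expose it over NVMe/TCP on 10.0.0.2:4420 again.
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420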
00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 bdev_null0 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 [2024-04-27 00:12:13.622029] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@30 -- # for sub in "$@" 00:27:44.912 00:12:13 -- target/dif.sh@31 -- # create_subsystem 1 00:27:44.912 00:12:13 -- target/dif.sh@18 -- # local sub_id=1 00:27:44.912 00:12:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 bdev_null1 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:44.912 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.912 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.912 00:12:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.913 00:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.913 00:12:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.913 00:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.913 00:12:13 -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:44.913 00:12:13 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:44.913 00:12:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:44.913 00:12:13 -- nvmf/common.sh@521 -- # config=() 00:27:44.913 00:12:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.913 00:12:13 -- nvmf/common.sh@521 -- # local subsystem config 00:27:44.913 00:12:13 -- common/autotest_common.sh@1342 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.913 00:12:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:44.913 00:12:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:44.913 { 00:27:44.913 "params": { 00:27:44.913 "name": "Nvme$subsystem", 00:27:44.913 "trtype": "$TEST_TRANSPORT", 00:27:44.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.913 "adrfam": "ipv4", 00:27:44.913 "trsvcid": "$NVMF_PORT", 00:27:44.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.913 "hdgst": ${hdgst:-false}, 00:27:44.913 "ddgst": ${ddgst:-false} 00:27:44.913 }, 00:27:44.913 "method": "bdev_nvme_attach_controller" 00:27:44.913 } 00:27:44.913 EOF 00:27:44.913 )") 00:27:44.913 00:12:13 -- target/dif.sh@82 -- # gen_fio_conf 00:27:44.913 00:12:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:44.913 00:12:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:44.913 00:12:13 -- target/dif.sh@54 -- # local file 00:27:44.913 00:12:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:44.913 00:12:13 -- target/dif.sh@56 -- # cat 00:27:44.913 00:12:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:44.913 00:12:13 -- common/autotest_common.sh@1327 -- # shift 00:27:44.913 00:12:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:44.913 00:12:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:44.913 00:12:13 -- nvmf/common.sh@543 -- # cat 00:27:44.913 00:12:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:44.913 00:12:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:44.913 00:12:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:44.913 00:12:13 -- target/dif.sh@72 -- # (( file <= files )) 00:27:44.913 00:12:13 -- target/dif.sh@73 -- # cat 00:27:44.913 00:12:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:44.913 00:12:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:44.913 00:12:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:44.913 { 00:27:44.913 "params": { 00:27:44.913 "name": "Nvme$subsystem", 00:27:44.913 "trtype": "$TEST_TRANSPORT", 00:27:44.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.913 "adrfam": "ipv4", 00:27:44.913 "trsvcid": "$NVMF_PORT", 00:27:44.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.913 "hdgst": ${hdgst:-false}, 00:27:44.913 "ddgst": ${ddgst:-false} 00:27:44.913 }, 00:27:44.913 "method": "bdev_nvme_attach_controller" 00:27:44.913 } 00:27:44.913 EOF 00:27:44.913 )") 00:27:44.913 00:12:13 -- target/dif.sh@72 -- # (( file++ )) 00:27:44.913 00:12:13 -- target/dif.sh@72 -- # (( file <= files )) 00:27:44.913 00:12:13 -- nvmf/common.sh@543 -- # cat 00:27:44.913 00:12:13 -- nvmf/common.sh@545 -- # jq . 
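[editor's note] gen_fio_conf (the target/dif.sh cat and file-counter lines in this trace) emits the job file that fio later reads from /dev/fd/61. A hedged reconstruction of an equivalent job file for this case (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, 5-second run, two target files, which matches the "Starting 4 threads" banner below) is written here as a heredoc; the Nvme0n1/Nvme1n1 filenames are assumed from SPDK's <controller>n<nsid> bdev naming and are not printed in the log.

  cat > dif.fio <<'EOF'
  [global]
  thread=1
  ioengine=spdk_bdev
  direct=1
  time_based=1
  runtime=5
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1
  EOF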
00:27:44.913 00:12:13 -- nvmf/common.sh@546 -- # IFS=, 00:27:44.913 00:12:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:44.913 "params": { 00:27:44.913 "name": "Nvme0", 00:27:44.913 "trtype": "tcp", 00:27:44.913 "traddr": "10.0.0.2", 00:27:44.913 "adrfam": "ipv4", 00:27:44.913 "trsvcid": "4420", 00:27:44.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:44.913 "hdgst": false, 00:27:44.913 "ddgst": false 00:27:44.913 }, 00:27:44.913 "method": "bdev_nvme_attach_controller" 00:27:44.913 },{ 00:27:44.913 "params": { 00:27:44.913 "name": "Nvme1", 00:27:44.913 "trtype": "tcp", 00:27:44.913 "traddr": "10.0.0.2", 00:27:44.913 "adrfam": "ipv4", 00:27:44.913 "trsvcid": "4420", 00:27:44.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:44.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:44.913 "hdgst": false, 00:27:44.913 "ddgst": false 00:27:44.913 }, 00:27:44.913 "method": "bdev_nvme_attach_controller" 00:27:44.913 }' 00:27:44.913 00:12:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:44.913 00:12:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:44.913 00:12:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:44.913 00:12:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:44.913 00:12:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:44.913 00:12:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:44.913 00:12:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:44.913 00:12:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:44.913 00:12:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:44.913 00:12:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.913 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:44.913 ... 00:27:44.913 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:44.913 ... 
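[editor's note] Before LD_PRELOAD'ing the plugin, the wrapper probes it for sanitizer runtimes (the ldd | grep libasan / libclang_rt.asan | awk '{print $3}' lines above): if the plugin had been built with ASAN, its runtime would need to be preloaded ahead of the plugin for fio to dlopen it cleanly. A condensed sketch of that probe, under the assumption that this is all the loop does:

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  asan_lib=""
  for sanitizer in libasan libclang_rt.asan; do
      # Pick the resolved path (third ldd column) if the plugin links that runtime.
      lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$lib" ]] && asan_lib+=" $lib"
  done
  # In this run both probes came back empty, so LD_PRELOAD carries only the plugin.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf /dev/fd/62 /dev/fd/61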
00:27:44.913 fio-3.35 00:27:44.913 Starting 4 threads 00:27:44.913 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.207 00:27:50.207 filename0: (groupid=0, jobs=1): err= 0: pid=580091: Sat Apr 27 00:12:19 2024 00:27:50.207 read: IOPS=2187, BW=17.1MiB/s (17.9MB/s)(85.5MiB/5002msec) 00:27:50.207 slat (nsec): min=5312, max=78785, avg=6643.43, stdev=2975.76 00:27:50.207 clat (usec): min=2021, max=6395, avg=3639.38, stdev=616.09 00:27:50.207 lat (usec): min=2027, max=6401, avg=3646.02, stdev=615.97 00:27:50.207 clat percentiles (usec): 00:27:50.207 | 1.00th=[ 2540], 5.00th=[ 2868], 10.00th=[ 3097], 20.00th=[ 3261], 00:27:50.208 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3589], 00:27:50.208 | 70.00th=[ 3654], 80.00th=[ 3785], 90.00th=[ 4686], 95.00th=[ 5145], 00:27:50.208 | 99.00th=[ 5407], 99.50th=[ 5669], 99.90th=[ 5735], 99.95th=[ 6063], 00:27:50.208 | 99.99th=[ 6390] 00:27:50.208 bw ( KiB/s): min=17024, max=18032, per=25.74%, avg=17571.56, stdev=410.33, samples=9 00:27:50.208 iops : min= 2128, max= 2254, avg=2196.44, stdev=51.29, samples=9 00:27:50.208 lat (msec) : 4=83.66%, 10=16.34% 00:27:50.208 cpu : usr=98.10%, sys=1.66%, ctx=10, majf=0, minf=54 00:27:50.208 IO depths : 1=0.1%, 2=0.4%, 4=70.2%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.208 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.208 issued rwts: total=10942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.208 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:50.208 filename0: (groupid=0, jobs=1): err= 0: pid=580092: Sat Apr 27 00:12:19 2024 00:27:50.208 read: IOPS=2067, BW=16.2MiB/s (16.9MB/s)(80.8MiB/5001msec) 00:27:50.208 slat (nsec): min=5310, max=73251, avg=6399.41, stdev=2766.37 00:27:50.208 clat (usec): min=1682, max=6725, avg=3850.99, stdev=692.84 00:27:50.208 lat (usec): min=1688, max=6730, avg=3857.39, stdev=692.77 00:27:50.208 clat percentiles (usec): 00:27:50.208 | 1.00th=[ 3032], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3392], 00:27:50.208 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3621], 60.00th=[ 3687], 00:27:50.208 | 70.00th=[ 3752], 80.00th=[ 4178], 90.00th=[ 5145], 95.00th=[ 5342], 00:27:50.208 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6259], 99.95th=[ 6456], 00:27:50.208 | 99.99th=[ 6718] 00:27:50.208 bw ( KiB/s): min=15840, max=17040, per=24.19%, avg=16517.33, stdev=332.94, samples=9 00:27:50.208 iops : min= 1980, max= 2130, avg=2064.67, stdev=41.62, samples=9 00:27:50.208 lat (msec) : 2=0.03%, 4=77.14%, 10=22.83% 00:27:50.208 cpu : usr=97.72%, sys=2.06%, ctx=10, majf=0, minf=52 00:27:50.208 IO depths : 1=0.1%, 2=0.3%, 4=71.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.208 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.208 issued rwts: total=10342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.208 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:50.208 filename1: (groupid=0, jobs=1): err= 0: pid=580093: Sat Apr 27 00:12:19 2024 00:27:50.208 read: IOPS=2171, BW=17.0MiB/s (17.8MB/s)(84.9MiB/5002msec) 00:27:50.208 slat (nsec): min=5405, max=44560, avg=8265.87, stdev=3211.15 00:27:50.208 clat (usec): min=1266, max=6381, avg=3660.88, stdev=583.06 00:27:50.208 lat (usec): min=1284, max=6389, avg=3669.15, stdev=582.93 00:27:50.208 clat percentiles (usec): 00:27:50.208 | 1.00th=[ 2671], 5.00th=[ 3064], 10.00th=[ 
3163], 20.00th=[ 3294], 00:27:50.208 | 30.00th=[ 3392], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3621], 00:27:50.208 | 70.00th=[ 3687], 80.00th=[ 3785], 90.00th=[ 4490], 95.00th=[ 5145], 00:27:50.208 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 5866], 99.95th=[ 5932], 00:27:50.208 | 99.99th=[ 6390] 00:27:50.208 bw ( KiB/s): min=16816, max=17728, per=25.32%, avg=17290.67, stdev=305.99, samples=9 00:27:50.208 iops : min= 2102, max= 2216, avg=2161.33, stdev=38.25, samples=9 00:27:50.208 lat (msec) : 2=0.29%, 4=85.20%, 10=14.51% 00:27:50.208 cpu : usr=96.74%, sys=3.00%, ctx=9, majf=0, minf=29 00:27:50.208 IO depths : 1=0.2%, 2=0.5%, 4=72.3%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.208 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.208 issued rwts: total=10863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.208 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:50.208 filename1: (groupid=0, jobs=1): err= 0: pid=580094: Sat Apr 27 00:12:19 2024 00:27:50.208 read: IOPS=2107, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5001msec) 00:27:50.208 slat (nsec): min=5308, max=52060, avg=6591.95, stdev=2722.89 00:27:50.208 clat (usec): min=2075, max=6614, avg=3777.03, stdev=674.03 00:27:50.208 lat (usec): min=2080, max=6639, avg=3783.62, stdev=673.96 00:27:50.208 clat percentiles (usec): 00:27:50.208 | 1.00th=[ 2868], 5.00th=[ 3130], 10.00th=[ 3195], 20.00th=[ 3359], 00:27:50.208 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3589], 60.00th=[ 3654], 00:27:50.208 | 70.00th=[ 3720], 80.00th=[ 3949], 90.00th=[ 5145], 95.00th=[ 5342], 00:27:50.208 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6325], 99.95th=[ 6325], 00:27:50.208 | 99.99th=[ 6521] 00:27:50.208 bw ( KiB/s): min=16640, max=17120, per=24.72%, avg=16876.56, stdev=148.43, samples=9 00:27:50.208 iops : min= 2080, max= 2140, avg=2109.56, stdev=18.57, samples=9 00:27:50.208 lat (msec) : 4=81.42%, 10=18.58% 00:27:50.208 cpu : usr=97.80%, sys=1.92%, ctx=62, majf=0, minf=41 00:27:50.208 IO depths : 1=0.1%, 2=0.3%, 4=72.3%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.208 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.208 issued rwts: total=10541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.208 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:50.208 00:27:50.208 Run status group 0 (all jobs): 00:27:50.208 READ: bw=66.7MiB/s (69.9MB/s), 16.2MiB/s-17.1MiB/s (16.9MB/s-17.9MB/s), io=334MiB (350MB), run=5001-5002msec 00:27:50.208 00:12:19 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:50.208 00:12:19 -- target/dif.sh@43 -- # local sub 00:27:50.208 00:12:19 -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.208 00:12:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:50.208 00:12:19 -- target/dif.sh@36 -- # local sub_id=0 00:27:50.208 00:12:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.208 00:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.208 00:12:19 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 00:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.208 00:12:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:50.208 00:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.208 00:12:19 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 00:12:19 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.208 00:12:19 -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.208 00:12:19 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:50.208 00:12:19 -- target/dif.sh@36 -- # local sub_id=1 00:27:50.208 00:12:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.208 00:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.208 00:12:19 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 00:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.208 00:12:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:50.208 00:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.208 00:12:19 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 00:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.208 00:27:50.208 real 0m24.410s 00:27:50.208 user 5m12.531s 00:27:50.208 sys 0m3.736s 00:27:50.208 00:12:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:50.208 00:12:20 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 ************************************ 00:27:50.208 END TEST fio_dif_rand_params 00:27:50.208 ************************************ 00:27:50.208 00:12:20 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:50.208 00:12:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:50.208 00:12:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:50.208 00:12:20 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 ************************************ 00:27:50.208 START TEST fio_dif_digest 00:27:50.208 ************************************ 00:27:50.208 00:12:20 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:27:50.208 00:12:20 -- target/dif.sh@123 -- # local NULL_DIF 00:27:50.208 00:12:20 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:50.208 00:12:20 -- target/dif.sh@125 -- # local hdgst ddgst 00:27:50.208 00:12:20 -- target/dif.sh@127 -- # NULL_DIF=3 00:27:50.208 00:12:20 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:50.208 00:12:20 -- target/dif.sh@127 -- # numjobs=3 00:27:50.208 00:12:20 -- target/dif.sh@127 -- # iodepth=3 00:27:50.208 00:12:20 -- target/dif.sh@127 -- # runtime=10 00:27:50.208 00:12:20 -- target/dif.sh@128 -- # hdgst=true 00:27:50.208 00:12:20 -- target/dif.sh@128 -- # ddgst=true 00:27:50.208 00:12:20 -- target/dif.sh@130 -- # create_subsystems 0 00:27:50.208 00:12:20 -- target/dif.sh@28 -- # local sub 00:27:50.208 00:12:20 -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.208 00:12:20 -- target/dif.sh@31 -- # create_subsystem 0 00:27:50.208 00:12:20 -- target/dif.sh@18 -- # local sub_id=0 00:27:50.208 00:12:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:50.208 00:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.208 00:12:20 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 bdev_null0 00:27:50.208 00:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.208 00:12:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:50.208 00:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.208 00:12:20 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 00:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.208 00:12:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:50.208 
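[editor's note] The digest test changes only two things relative to the earlier runs: the backing null bdev is created with DIF type 3 protection information (still 16 bytes of metadata on 512-byte blocks), and the attach parameters printed further below set hdgst/ddgst to true so the host requests NVMe/TCP header and data digests. The target-side command, again as a hedged rpc.py equivalent of the traced rpc_cmd:

  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3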
00:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.208 00:12:20 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 00:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.208 00:12:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:50.208 00:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.208 00:12:20 -- common/autotest_common.sh@10 -- # set +x 00:27:50.208 [2024-04-27 00:12:20.251656] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.208 00:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.208 00:12:20 -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:50.208 00:12:20 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:50.208 00:12:20 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:50.208 00:12:20 -- nvmf/common.sh@521 -- # config=() 00:27:50.208 00:12:20 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.208 00:12:20 -- nvmf/common.sh@521 -- # local subsystem config 00:27:50.208 00:12:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:50.209 00:12:20 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.209 00:12:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:50.209 { 00:27:50.209 "params": { 00:27:50.209 "name": "Nvme$subsystem", 00:27:50.209 "trtype": "$TEST_TRANSPORT", 00:27:50.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.209 "adrfam": "ipv4", 00:27:50.209 "trsvcid": "$NVMF_PORT", 00:27:50.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.209 "hdgst": ${hdgst:-false}, 00:27:50.209 "ddgst": ${ddgst:-false} 00:27:50.209 }, 00:27:50.209 "method": "bdev_nvme_attach_controller" 00:27:50.209 } 00:27:50.209 EOF 00:27:50.209 )") 00:27:50.209 00:12:20 -- target/dif.sh@82 -- # gen_fio_conf 00:27:50.209 00:12:20 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:50.209 00:12:20 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:50.209 00:12:20 -- target/dif.sh@54 -- # local file 00:27:50.209 00:12:20 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:50.209 00:12:20 -- target/dif.sh@56 -- # cat 00:27:50.209 00:12:20 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.209 00:12:20 -- common/autotest_common.sh@1327 -- # shift 00:27:50.209 00:12:20 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:50.209 00:12:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.209 00:12:20 -- nvmf/common.sh@543 -- # cat 00:27:50.209 00:12:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.209 00:12:20 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:50.209 00:12:20 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:50.209 00:12:20 -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.209 00:12:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:50.209 00:12:20 -- nvmf/common.sh@545 -- # jq . 
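For reference, the fio_dif_digest setup traced above reduces to a short RPC sequence against the running nvmf target: a 64 MB null bdev is created with 512-byte blocks, 16 bytes of metadata per block and DIF type 3, exposed through an NVMe-oF subsystem, and a TCP listener is added on 10.0.0.2:4420. A minimal sketch reconstructed from the trace (rpc_cmd is the harness wrapper around scripts/rpc.py; the NQN, serial number and address are the values used in this run):

  # null bdev with per-block metadata and DIF type 3, as created by create_subsystem 0
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # expose it over NVMe-oF TCP on 10.0.0.2:4420
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420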
00:27:50.209 00:12:20 -- nvmf/common.sh@546 -- # IFS=, 00:27:50.209 00:12:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:50.209 "params": { 00:27:50.209 "name": "Nvme0", 00:27:50.209 "trtype": "tcp", 00:27:50.209 "traddr": "10.0.0.2", 00:27:50.209 "adrfam": "ipv4", 00:27:50.209 "trsvcid": "4420", 00:27:50.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:50.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:50.209 "hdgst": true, 00:27:50.209 "ddgst": true 00:27:50.209 }, 00:27:50.209 "method": "bdev_nvme_attach_controller" 00:27:50.209 }' 00:27:50.209 00:12:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:50.209 00:12:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:50.209 00:12:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.209 00:12:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.209 00:12:20 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:50.209 00:12:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:50.209 00:12:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:50.209 00:12:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:50.209 00:12:20 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:50.209 00:12:20 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.470 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:50.470 ... 00:27:50.470 fio-3.35 00:27:50.470 Starting 3 threads 00:27:50.731 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.968 00:28:02.968 filename0: (groupid=0, jobs=1): err= 0: pid=581590: Sat Apr 27 00:12:31 2024 00:28:02.968 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(278MiB/10047msec) 00:28:02.968 slat (nsec): min=5568, max=31062, avg=6380.44, stdev=781.22 00:28:02.968 clat (usec): min=7614, max=58165, avg=13531.05, stdev=2753.88 00:28:02.968 lat (usec): min=7620, max=58171, avg=13537.43, stdev=2754.10 00:28:02.968 clat percentiles (usec): 00:28:02.968 | 1.00th=[ 9372], 5.00th=[10945], 10.00th=[11863], 20.00th=[12518], 00:28:02.968 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:28:02.968 | 70.00th=[14091], 80.00th=[14484], 90.00th=[14877], 95.00th=[15270], 00:28:02.968 | 99.00th=[16450], 99.50th=[17171], 99.90th=[55313], 99.95th=[56886], 00:28:02.968 | 99.99th=[57934] 00:28:02.968 bw ( KiB/s): min=25344, max=29696, per=34.77%, avg=28426.10, stdev=981.95, samples=20 00:28:02.968 iops : min= 198, max= 232, avg=222.05, stdev= 7.72, samples=20 00:28:02.968 lat (msec) : 10=2.65%, 20=96.99%, 100=0.36% 00:28:02.968 cpu : usr=95.95%, sys=3.83%, ctx=32, majf=0, minf=105 00:28:02.968 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.968 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.968 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:02.968 filename0: (groupid=0, jobs=1): err= 0: pid=581591: Sat Apr 27 00:12:31 2024 00:28:02.968 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(265MiB/10047msec) 00:28:02.968 slat (nsec): min=5623, max=30398, avg=6367.40, stdev=934.48 00:28:02.968 clat (usec): min=6393, 
max=56906, avg=14168.05, stdev=4199.84 00:28:02.968 lat (usec): min=6406, max=56912, avg=14174.42, stdev=4199.77 00:28:02.968 clat percentiles (usec): 00:28:02.968 | 1.00th=[ 9634], 5.00th=[11731], 10.00th=[12387], 20.00th=[12911], 00:28:02.968 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14222], 00:28:02.968 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:28:02.968 | 99.00th=[17433], 99.50th=[55837], 99.90th=[56361], 99.95th=[56886], 00:28:02.968 | 99.99th=[56886] 00:28:02.968 bw ( KiB/s): min=24832, max=29184, per=33.21%, avg=27148.80, stdev=1368.49, samples=20 00:28:02.968 iops : min= 194, max= 228, avg=212.10, stdev=10.69, samples=20 00:28:02.968 lat (msec) : 10=1.70%, 20=97.36%, 50=0.05%, 100=0.89% 00:28:02.968 cpu : usr=96.00%, sys=3.79%, ctx=15, majf=0, minf=168 00:28:02.968 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.968 issued rwts: total=2123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.968 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:02.968 filename0: (groupid=0, jobs=1): err= 0: pid=581592: Sat Apr 27 00:12:31 2024 00:28:02.968 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(259MiB/10046msec) 00:28:02.968 slat (nsec): min=8089, max=30528, avg=8830.15, stdev=661.47 00:28:02.968 clat (usec): min=8668, max=57287, avg=14520.09, stdev=3875.60 00:28:02.968 lat (usec): min=8677, max=57295, avg=14528.92, stdev=3875.75 00:28:02.968 clat percentiles (usec): 00:28:02.968 | 1.00th=[10159], 5.00th=[11994], 10.00th=[12649], 20.00th=[13304], 00:28:02.968 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14615], 00:28:02.968 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:28:02.968 | 99.00th=[18220], 99.50th=[54264], 99.90th=[56361], 99.95th=[56886], 00:28:02.968 | 99.99th=[57410] 00:28:02.968 bw ( KiB/s): min=24064, max=28160, per=32.39%, avg=26483.20, stdev=1244.41, samples=20 00:28:02.968 iops : min= 188, max= 220, avg=206.90, stdev= 9.72, samples=20 00:28:02.968 lat (msec) : 10=0.82%, 20=98.36%, 50=0.05%, 100=0.77% 00:28:02.968 cpu : usr=95.75%, sys=4.00%, ctx=14, majf=0, minf=83 00:28:02.968 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.968 issued rwts: total=2071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.968 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:02.968 00:28:02.968 Run status group 0 (all jobs): 00:28:02.968 READ: bw=79.8MiB/s (83.7MB/s), 25.8MiB/s-27.7MiB/s (27.0MB/s-29.0MB/s), io=802MiB (841MB), run=10046-10047msec 00:28:02.968 00:12:31 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:02.968 00:12:31 -- target/dif.sh@43 -- # local sub 00:28:02.968 00:12:31 -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.968 00:12:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:02.968 00:12:31 -- target/dif.sh@36 -- # local sub_id=0 00:28:02.968 00:12:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:02.968 00:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.968 00:12:31 -- common/autotest_common.sh@10 -- # set +x 00:28:02.968 00:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.968 
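The digest workload itself is driven through the SPDK fio bdev plugin rather than a kernel block device: the JSON printed above (with "hdgst" and "ddgst" set to true so header and data digests are enabled on the TCP connection) is handed to fio over an anonymous file descriptor. A sketch of the invocation as it appears in the trace; the /dev/fd/62 and /dev/fd/61 descriptors come from the harness's process substitution and the fio binary path is specific to this CI host:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 /dev/fd/61   # 62: bdev_nvme_attach_controller JSON, 61: generated fio job file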
00:12:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:02.968 00:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.968 00:12:31 -- common/autotest_common.sh@10 -- # set +x 00:28:02.968 00:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.968 00:28:02.968 real 0m11.252s 00:28:02.968 user 0m45.080s 00:28:02.968 sys 0m1.479s 00:28:02.968 00:12:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:02.968 00:12:31 -- common/autotest_common.sh@10 -- # set +x 00:28:02.968 ************************************ 00:28:02.968 END TEST fio_dif_digest 00:28:02.968 ************************************ 00:28:02.968 00:12:31 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:02.968 00:12:31 -- target/dif.sh@147 -- # nvmftestfini 00:28:02.968 00:12:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:02.968 00:12:31 -- nvmf/common.sh@117 -- # sync 00:28:02.968 00:12:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:02.968 00:12:31 -- nvmf/common.sh@120 -- # set +e 00:28:02.968 00:12:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:02.968 00:12:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:02.968 rmmod nvme_tcp 00:28:02.968 rmmod nvme_fabrics 00:28:02.968 rmmod nvme_keyring 00:28:02.968 00:12:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:02.968 00:12:31 -- nvmf/common.sh@124 -- # set -e 00:28:02.968 00:12:31 -- nvmf/common.sh@125 -- # return 0 00:28:02.968 00:12:31 -- nvmf/common.sh@478 -- # '[' -n 570508 ']' 00:28:02.968 00:12:31 -- nvmf/common.sh@479 -- # killprocess 570508 00:28:02.968 00:12:31 -- common/autotest_common.sh@936 -- # '[' -z 570508 ']' 00:28:02.968 00:12:31 -- common/autotest_common.sh@940 -- # kill -0 570508 00:28:02.968 00:12:31 -- common/autotest_common.sh@941 -- # uname 00:28:02.968 00:12:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:02.968 00:12:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 570508 00:28:02.968 00:12:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:02.968 00:12:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:02.968 00:12:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 570508' 00:28:02.968 killing process with pid 570508 00:28:02.968 00:12:31 -- common/autotest_common.sh@955 -- # kill 570508 00:28:02.968 00:12:31 -- common/autotest_common.sh@960 -- # wait 570508 00:28:02.968 00:12:31 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:02.968 00:12:31 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:04.354 Waiting for block devices as requested 00:28:04.615 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:04.615 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:04.615 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:04.876 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:04.876 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:04.876 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:04.876 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:05.137 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:05.137 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:05.397 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:05.397 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:05.397 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:05.657 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:05.657 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:05.657 0000:00:01.3 (8086 0b00): vfio-pci -> 
ioatdma 00:28:05.657 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:05.919 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:06.180 00:12:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:06.180 00:12:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:06.180 00:12:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.180 00:12:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.180 00:12:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.180 00:12:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:06.180 00:12:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.094 00:12:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.094 00:28:08.094 real 1m17.588s 00:28:08.094 user 7m57.505s 00:28:08.094 sys 0m19.424s 00:28:08.094 00:12:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:08.094 00:12:38 -- common/autotest_common.sh@10 -- # set +x 00:28:08.094 ************************************ 00:28:08.094 END TEST nvmf_dif 00:28:08.094 ************************************ 00:28:08.094 00:12:38 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:08.094 00:12:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:08.094 00:12:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:08.094 00:12:38 -- common/autotest_common.sh@10 -- # set +x 00:28:08.355 ************************************ 00:28:08.356 START TEST nvmf_abort_qd_sizes 00:28:08.356 ************************************ 00:28:08.356 00:12:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:08.356 * Looking for test storage... 
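Teardown of the dif tests (the rmmod, killprocess and setup.sh reset sequence above) follows the usual nvmftestfini pattern: unload the NVMe/TCP initiator modules, kill the nvmf target process, hand the devices back to their kernel drivers, and flush the test address from the initiator interface. Condensed from the trace, with the PID and interface name from this run:

  modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring above
  modprobe -v -r nvme-fabrics
  kill 570508 && wait 570508     # killprocess: nvmf target PID recorded at startup
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
  ip -4 addr flush cvl_0_1       # drop the initiator-side test address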
00:28:08.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:08.356 00:12:38 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.356 00:12:38 -- nvmf/common.sh@7 -- # uname -s 00:28:08.356 00:12:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.356 00:12:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.356 00:12:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.356 00:12:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.356 00:12:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.356 00:12:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.356 00:12:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.356 00:12:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.356 00:12:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.356 00:12:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.356 00:12:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:08.356 00:12:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:08.356 00:12:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.356 00:12:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.356 00:12:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.356 00:12:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.356 00:12:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.356 00:12:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.356 00:12:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.356 00:12:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.356 00:12:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.356 00:12:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.356 00:12:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.356 00:12:38 -- paths/export.sh@5 -- # export PATH 00:28:08.356 00:12:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.356 00:12:38 -- nvmf/common.sh@47 -- # : 0 00:28:08.356 00:12:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.356 00:12:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.356 00:12:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.356 00:12:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.356 00:12:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.356 00:12:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.356 00:12:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.356 00:12:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.356 00:12:38 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:08.356 00:12:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:08.356 00:12:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.356 00:12:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:08.356 00:12:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:08.356 00:12:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:08.356 00:12:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.356 00:12:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:08.356 00:12:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.356 00:12:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:08.356 00:12:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:08.356 00:12:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.356 00:12:38 -- common/autotest_common.sh@10 -- # set +x 00:28:16.490 00:12:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:16.490 00:12:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.490 00:12:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.490 00:12:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:16.490 00:12:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.490 00:12:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.490 00:12:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.490 00:12:45 -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.490 00:12:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:16.490 00:12:45 -- nvmf/common.sh@296 -- # e810=() 00:28:16.490 00:12:45 -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.490 00:12:45 -- nvmf/common.sh@297 -- # x722=() 00:28:16.490 00:12:45 -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.490 00:12:45 -- nvmf/common.sh@298 -- # mlx=() 00:28:16.490 00:12:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.490 00:12:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.490 00:12:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.490 00:12:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.490 00:12:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.490 00:12:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.490 00:12:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:16.490 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:16.490 00:12:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.490 00:12:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:16.490 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:16.490 00:12:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.490 00:12:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.490 00:12:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.490 00:12:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:16.490 00:12:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.490 00:12:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:16.490 Found net devices under 0000:31:00.0: cvl_0_0 00:28:16.490 00:12:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.490 00:12:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.490 00:12:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.490 00:12:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:16.490 00:12:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.490 00:12:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:16.490 Found net devices under 0000:31:00.1: cvl_0_1 00:28:16.490 00:12:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.490 00:12:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:16.490 00:12:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:16.490 00:12:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:16.490 00:12:45 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:16.490 00:12:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:16.490 00:12:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.490 00:12:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.490 00:12:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.490 00:12:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.490 00:12:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.490 00:12:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.491 00:12:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.491 00:12:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.491 00:12:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.491 00:12:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:16.491 00:12:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:16.491 00:12:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.491 00:12:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.491 00:12:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.491 00:12:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.491 00:12:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:16.491 00:12:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.491 00:12:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.491 00:12:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.491 00:12:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:16.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:28:16.491 00:28:16.491 --- 10.0.0.2 ping statistics --- 00:28:16.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.491 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:28:16.491 00:12:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
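Because this is a phy run, nvmftestinit builds the test network out of the two physical e810 ports found above rather than veth pairs: cvl_0_0 is moved into a private namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule plus two pings verify the 4420 path in both directions. The sequence, condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator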
00:28:16.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:28:16.491 00:28:16.491 --- 10.0.0.1 ping statistics --- 00:28:16.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.491 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:28:16.491 00:12:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.491 00:12:45 -- nvmf/common.sh@411 -- # return 0 00:28:16.491 00:12:45 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:16.491 00:12:45 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:18.404 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:18.404 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:18.404 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:18.666 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:19.237 00:12:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.237 00:12:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:19.237 00:12:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:19.237 00:12:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.237 00:12:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:19.237 00:12:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:19.237 00:12:49 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:19.237 00:12:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:19.237 00:12:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:19.237 00:12:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.237 00:12:49 -- nvmf/common.sh@470 -- # nvmfpid=591056 00:28:19.237 00:12:49 -- nvmf/common.sh@471 -- # waitforlisten 591056 00:28:19.237 00:12:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:19.237 00:12:49 -- common/autotest_common.sh@817 -- # '[' -z 591056 ']' 00:28:19.237 00:12:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.237 00:12:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:19.237 00:12:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.237 00:12:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:19.237 00:12:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.237 [2024-04-27 00:12:49.258426] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
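With the namespace in place, nvmfappstart launches the target inside it and records the PID for later teardown. The trace shows core mask 0xf (four reactors) and all tracepoint groups enabled via 0xFFFF; waitforlisten is the harness helper that waits for the application to answer on the default RPC socket /var/tmp/spdk.sock. A sketch, with paths as they appear in this run:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!                      # 591056 in this run
  waitforlisten "$nvmfpid"        # blocks until /var/tmp/spdk.sock is up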
00:28:19.237 [2024-04-27 00:12:49.258489] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.237 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.237 [2024-04-27 00:12:49.329420] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.237 [2024-04-27 00:12:49.404635] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.237 [2024-04-27 00:12:49.404675] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.237 [2024-04-27 00:12:49.404683] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.237 [2024-04-27 00:12:49.404690] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.237 [2024-04-27 00:12:49.404696] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.237 [2024-04-27 00:12:49.404808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.237 [2024-04-27 00:12:49.404949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.237 [2024-04-27 00:12:49.405245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.237 [2024-04-27 00:12:49.405246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.244 00:12:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:20.244 00:12:50 -- common/autotest_common.sh@850 -- # return 0 00:28:20.244 00:12:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:20.244 00:12:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:20.244 00:12:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.244 00:12:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.244 00:12:50 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:20.244 00:12:50 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:20.244 00:12:50 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:20.244 00:12:50 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:20.244 00:12:50 -- scripts/common.sh@310 -- # local nvmes 00:28:20.244 00:12:50 -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:28:20.244 00:12:50 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:20.244 00:12:50 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:20.244 00:12:50 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:28:20.244 00:12:50 -- scripts/common.sh@320 -- # uname -s 00:28:20.244 00:12:50 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:20.244 00:12:50 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:20.244 00:12:50 -- scripts/common.sh@325 -- # (( 1 )) 00:28:20.244 00:12:50 -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:28:20.244 00:12:50 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:20.244 00:12:50 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:28:20.244 00:12:50 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:20.244 00:12:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:20.244 00:12:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:20.244 00:12:50 -- 
common/autotest_common.sh@10 -- # set +x 00:28:20.244 ************************************ 00:28:20.244 START TEST spdk_target_abort 00:28:20.244 ************************************ 00:28:20.244 00:12:50 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:20.244 00:12:50 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:20.244 00:12:50 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:28:20.244 00:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.244 00:12:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.505 spdk_targetn1 00:28:20.505 00:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.505 00:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.505 00:12:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.505 [2024-04-27 00:12:50.547957] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.505 00:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:20.505 00:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.505 00:12:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.505 00:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:20.505 00:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.505 00:12:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.505 00:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:20.505 00:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.505 00:12:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.505 [2024-04-27 00:12:50.588214] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.505 00:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
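spdk_target_abort then points the freshly started target at the local NVMe drive found by the PCI scan (0000:65:00.0), wraps it in a TCP subsystem, and runs the abort example tool against it. The RPC sequence, reconstructed from the trace (NQN, serial number and addresses are the values used in this run):

  rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # produces bdev spdk_targetn1
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420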
00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:20.505 00:12:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:20.505 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.505 [2024-04-27 00:12:50.721319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:760 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:28:20.505 [2024-04-27 00:12:50.721344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:28:20.765 [2024-04-27 00:12:50.769889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2544 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:28:20.765 [2024-04-27 00:12:50.769912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.065 Initializing NVMe Controllers 00:28:24.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:24.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:24.065 Initialization complete. Launching workers. 
00:28:24.065 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9187, failed: 2 00:28:24.065 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2546, failed to submit 6643 00:28:24.065 success 629, unsuccess 1917, failed 0 00:28:24.065 00:12:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:24.065 00:12:53 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:24.065 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.065 [2024-04-27 00:12:53.858056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200007c50000 PRP2 0x0 00:28:24.065 [2024-04-27 00:12:53.858098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:28:24.065 [2024-04-27 00:12:53.888934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:904 len:8 PRP1 0x200007c54000 PRP2 0x0 00:28:24.065 [2024-04-27 00:12:53.888960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0076 p:1 m:0 dnr:0 00:28:24.065 [2024-04-27 00:12:53.960934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:2392 len:8 PRP1 0x200007c40000 PRP2 0x0 00:28:24.065 [2024-04-27 00:12:53.960959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:27.369 [2024-04-27 00:12:56.885859] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaa360 is same with the state(5) to be set 00:28:27.369 [2024-04-27 00:12:56.885894] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaa360 is same with the state(5) to be set 00:28:27.369 [2024-04-27 00:12:56.885902] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaa360 is same with the state(5) to be set 00:28:27.369 [2024-04-27 00:12:56.885909] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaa360 is same with the state(5) to be set 00:28:27.369 [2024-04-27 00:12:56.885916] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaa360 is same with the state(5) to be set 00:28:27.369 [2024-04-27 00:12:56.885922] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaa360 is same with the state(5) to be set 00:28:27.369 [2024-04-27 00:12:56.885929] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaa360 is same with the state(5) to be set 00:28:27.369 [2024-04-27 00:12:56.885935] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaa360 is same with the state(5) to be set 00:28:27.369 [2024-04-27 00:12:56.885942] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdaa360 is same with the state(5) to be set 00:28:27.369 Initializing NVMe Controllers 00:28:27.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:27.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:27.369 Initialization complete. Launching workers. 
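The rabort helper repeats the same workload at queue depths 4, 24 and 64; in each summary the "success" count is aborts that took effect, while "unsuccess" counts abort commands that returned without aborting anything, typically because the target had already completed the I/O, so non-zero values there are expected. One iteration of the loop, as invoked in the trace:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'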
00:28:27.369 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8566, failed: 3 00:28:27.369 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1209, failed to submit 7360 00:28:27.369 success 336, unsuccess 873, failed 0 00:28:27.369 00:12:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.369 00:12:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.369 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.669 Initializing NVMe Controllers 00:28:30.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:30.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:30.669 Initialization complete. Launching workers. 00:28:30.670 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43159, failed: 0 00:28:30.670 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2477, failed to submit 40682 00:28:30.670 success 632, unsuccess 1845, failed 0 00:28:30.670 00:13:00 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:30.670 00:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.670 00:13:00 -- common/autotest_common.sh@10 -- # set +x 00:28:30.670 00:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.670 00:13:00 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:30.670 00:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.670 00:13:00 -- common/autotest_common.sh@10 -- # set +x 00:28:32.051 00:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.051 00:13:02 -- target/abort_qd_sizes.sh@61 -- # killprocess 591056 00:28:32.051 00:13:02 -- common/autotest_common.sh@936 -- # '[' -z 591056 ']' 00:28:32.051 00:13:02 -- common/autotest_common.sh@940 -- # kill -0 591056 00:28:32.051 00:13:02 -- common/autotest_common.sh@941 -- # uname 00:28:32.051 00:13:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:32.051 00:13:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 591056 00:28:32.051 00:13:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:32.051 00:13:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:32.051 00:13:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 591056' 00:28:32.051 killing process with pid 591056 00:28:32.051 00:13:02 -- common/autotest_common.sh@955 -- # kill 591056 00:28:32.051 00:13:02 -- common/autotest_common.sh@960 -- # wait 591056 00:28:32.051 00:28:32.051 real 0m11.981s 00:28:32.051 user 0m49.242s 00:28:32.051 sys 0m1.772s 00:28:32.051 00:13:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:32.051 00:13:02 -- common/autotest_common.sh@10 -- # set +x 00:28:32.051 ************************************ 00:28:32.051 END TEST spdk_target_abort 00:28:32.051 ************************************ 00:28:32.051 00:13:02 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:32.051 00:13:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:32.051 00:13:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:32.051 00:13:02 -- common/autotest_common.sh@10 -- # set +x 00:28:32.312 
************************************ 00:28:32.312 START TEST kernel_target_abort 00:28:32.312 ************************************ 00:28:32.312 00:13:02 -- common/autotest_common.sh@1111 -- # kernel_target 00:28:32.312 00:13:02 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:32.312 00:13:02 -- nvmf/common.sh@717 -- # local ip 00:28:32.312 00:13:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:32.312 00:13:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:32.312 00:13:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.312 00:13:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.312 00:13:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:32.312 00:13:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.312 00:13:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:32.312 00:13:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:32.312 00:13:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:32.312 00:13:02 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:32.312 00:13:02 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:32.312 00:13:02 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:32.312 00:13:02 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:32.312 00:13:02 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:32.312 00:13:02 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:32.312 00:13:02 -- nvmf/common.sh@628 -- # local block nvme 00:28:32.312 00:13:02 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:32.312 00:13:02 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:32.312 00:13:02 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:32.312 00:13:02 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:35.611 Waiting for block devices as requested 00:28:35.871 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:35.871 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:35.871 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:36.170 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:36.170 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:36.170 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:36.170 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:36.430 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:36.430 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:36.430 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:36.690 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:36.690 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:36.690 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:36.950 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:36.950 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:36.950 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:36.950 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:37.211 00:13:07 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:37.211 00:13:07 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:37.211 00:13:07 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:28:37.211 00:13:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:37.211 00:13:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:37.211 00:13:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:37.211 00:13:07 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:37.211 00:13:07 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:37.211 00:13:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:37.471 No valid GPT data, bailing 00:28:37.471 00:13:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:37.471 00:13:07 -- scripts/common.sh@391 -- # pt= 00:28:37.471 00:13:07 -- scripts/common.sh@392 -- # return 1 00:28:37.471 00:13:07 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:37.471 00:13:07 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:28:37.471 00:13:07 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:37.471 00:13:07 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:37.471 00:13:07 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:37.471 00:13:07 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:37.471 00:13:07 -- nvmf/common.sh@656 -- # echo 1 00:28:37.471 00:13:07 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:28:37.471 00:13:07 -- nvmf/common.sh@658 -- # echo 1 00:28:37.471 00:13:07 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:37.471 00:13:07 -- nvmf/common.sh@661 -- # echo tcp 00:28:37.471 00:13:07 -- nvmf/common.sh@662 -- # echo 4420 00:28:37.471 00:13:07 -- nvmf/common.sh@663 -- # echo ipv4 00:28:37.471 00:13:07 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:37.471 00:13:07 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:37.471 00:28:37.471 Discovery Log Number of Records 2, Generation counter 2 00:28:37.471 =====Discovery Log Entry 0====== 00:28:37.471 trtype: tcp 00:28:37.471 adrfam: ipv4 00:28:37.471 subtype: current discovery subsystem 00:28:37.471 treq: not specified, sq flow control disable supported 00:28:37.471 portid: 1 00:28:37.471 trsvcid: 4420 00:28:37.471 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:37.471 traddr: 10.0.0.1 00:28:37.471 eflags: none 00:28:37.471 sectype: none 00:28:37.471 =====Discovery Log Entry 1====== 00:28:37.471 trtype: tcp 00:28:37.471 adrfam: ipv4 00:28:37.471 subtype: nvme subsystem 00:28:37.471 treq: not specified, sq flow control disable supported 00:28:37.471 portid: 1 00:28:37.471 trsvcid: 4420 00:28:37.471 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:37.471 traddr: 10.0.0.1 00:28:37.471 eflags: none 00:28:37.471 sectype: none 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:37.471 00:13:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:37.471 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.768 Initializing NVMe Controllers 00:28:40.768 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:40.768 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:40.768 Initialization complete. Launching workers. 
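kernel_target_abort exercises the same abort workload against the Linux kernel nvmet target instead of SPDK: configure_kernel_target builds the subsystem under configfs, backs it with the local /dev/nvme0n1 (the GPT check above confirmed the disk is safe to claim), and publishes it on TCP at 10.0.0.1:4420, which the nvme discover output above confirms. The redirect targets of the echo commands are not visible in the xtrace, so the attribute file names below follow the standard nvmet configfs layout and are an assumption, not copied from the log:

  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$sub"
  mkdir "$sub/namespaces/1"
  mkdir /sys/kernel/config/nvmet/ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"              # attribute names assumed
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420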
00:28:40.768 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61733, failed: 0 00:28:40.768 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 61733, failed to submit 0 00:28:40.768 success 0, unsuccess 61733, failed 0 00:28:40.768 00:13:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:40.768 00:13:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:40.768 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.064 Initializing NVMe Controllers 00:28:44.064 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:44.064 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:44.064 Initialization complete. Launching workers. 00:28:44.064 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103158, failed: 0 00:28:44.064 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26022, failed to submit 77136 00:28:44.064 success 0, unsuccess 26022, failed 0 00:28:44.064 00:13:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:44.064 00:13:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:44.064 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.362 Initializing NVMe Controllers 00:28:47.362 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:47.362 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:47.362 Initialization complete. Launching workers. 
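The abort run above and the two around it are one helper (rabort) iterating over its queue-depth list; a minimal equivalent of what the trace shows, with the binary path and target string copied verbatim:

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
          -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done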
00:28:47.362 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98637, failed: 0 00:28:47.362 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24670, failed to submit 73967 00:28:47.362 success 0, unsuccess 24670, failed 0 00:28:47.362 00:13:16 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:47.362 00:13:16 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:47.362 00:13:16 -- nvmf/common.sh@675 -- # echo 0 00:28:47.362 00:13:16 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.362 00:13:16 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:47.362 00:13:16 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:47.362 00:13:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.362 00:13:16 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:47.362 00:13:16 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:47.362 00:13:16 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:50.662 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:50.662 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:52.044 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:52.305 00:28:52.305 real 0m19.988s 00:28:52.305 user 0m9.229s 00:28:52.305 sys 0m6.147s 00:28:52.305 00:13:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:52.305 00:13:22 -- common/autotest_common.sh@10 -- # set +x 00:28:52.305 ************************************ 00:28:52.305 END TEST kernel_target_abort 00:28:52.305 ************************************ 00:28:52.305 00:13:22 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:52.305 00:13:22 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:52.305 00:13:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:52.305 00:13:22 -- nvmf/common.sh@117 -- # sync 00:28:52.305 00:13:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:52.305 00:13:22 -- nvmf/common.sh@120 -- # set +e 00:28:52.305 00:13:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:52.305 00:13:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:52.305 rmmod nvme_tcp 00:28:52.305 rmmod nvme_fabrics 00:28:52.305 rmmod nvme_keyring 00:28:52.305 00:13:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:52.305 00:13:22 -- nvmf/common.sh@124 -- # set -e 00:28:52.305 00:13:22 -- nvmf/common.sh@125 -- # return 0 00:28:52.305 00:13:22 -- nvmf/common.sh@478 -- # '[' -n 591056 ']' 
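A quick consistency check on the counters printed by the three abort runs: every completed I/O should be accounted for by an abort that was either submitted or failed to submit, and the log's numbers do add up:

  echo $(( 61733 +     0 ))   # 61733  = I/O completed in the -q 4 run
  echo $(( 26022 + 77136 ))   # 103158 = I/O completed in the -q 24 run
  echo $(( 24670 + 73967 ))   # 98637  = I/O completed in the -q 64 run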
00:28:52.305 00:13:22 -- nvmf/common.sh@479 -- # killprocess 591056 00:28:52.305 00:13:22 -- common/autotest_common.sh@936 -- # '[' -z 591056 ']' 00:28:52.305 00:13:22 -- common/autotest_common.sh@940 -- # kill -0 591056 00:28:52.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (591056) - No such process 00:28:52.305 00:13:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 591056 is not found' 00:28:52.305 Process with pid 591056 is not found 00:28:52.305 00:13:22 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:52.305 00:13:22 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:55.615 Waiting for block devices as requested 00:28:55.615 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:55.875 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:55.875 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:55.875 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:56.136 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:56.136 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:56.136 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:56.136 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:56.397 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:56.397 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:56.662 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:56.662 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:56.662 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:56.923 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:56.923 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:56.923 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:56.923 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:57.184 00:13:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:57.184 00:13:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:57.184 00:13:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.184 00:13:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:57.184 00:13:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.184 00:13:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:57.184 00:13:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.801 00:13:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.801 00:28:59.801 real 0m51.019s 00:28:59.801 user 1m3.474s 00:28:59.801 sys 0m18.434s 00:28:59.801 00:13:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:59.801 00:13:29 -- common/autotest_common.sh@10 -- # set +x 00:28:59.801 ************************************ 00:28:59.801 END TEST nvmf_abort_qd_sizes 00:28:59.801 ************************************ 00:28:59.801 00:13:29 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:59.801 00:13:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:59.801 00:13:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:59.801 00:13:29 -- common/autotest_common.sh@10 -- # set +x 00:28:59.801 ************************************ 00:28:59.801 START TEST keyring_file 00:28:59.801 ************************************ 00:28:59.801 00:13:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:59.801 * Looking for test storage... 
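Each suite in this log is wrapped by run_test from autotest_common.sh, which produces the START TEST / END TEST banners and the real/user/sys timing summaries seen above. In spirit it behaves like this simplified sketch; the real helper also handles xtrace and timing bookkeeping, so this is illustrative only:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # the suite itself, e.g. test/keyring/file.sh
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }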
00:28:59.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:59.801 00:13:29 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:59.801 00:13:29 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.801 00:13:29 -- nvmf/common.sh@7 -- # uname -s 00:28:59.801 00:13:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.801 00:13:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.801 00:13:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.801 00:13:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.801 00:13:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.801 00:13:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.801 00:13:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.801 00:13:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.801 00:13:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.801 00:13:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.801 00:13:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:59.801 00:13:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:59.801 00:13:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.801 00:13:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.801 00:13:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.801 00:13:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.801 00:13:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.801 00:13:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.801 00:13:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.801 00:13:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.801 00:13:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.801 00:13:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.801 00:13:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.801 00:13:29 -- paths/export.sh@5 -- # export PATH 00:28:59.801 00:13:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.801 00:13:29 -- nvmf/common.sh@47 -- # : 0 00:28:59.801 00:13:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:59.801 00:13:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:59.801 00:13:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.801 00:13:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.801 00:13:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.801 00:13:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:59.801 00:13:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:59.801 00:13:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:59.801 00:13:29 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:59.801 00:13:29 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:59.801 00:13:29 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:59.801 00:13:29 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:59.801 00:13:29 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:59.801 00:13:29 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:59.801 00:13:29 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:59.801 00:13:29 -- keyring/common.sh@15 -- # local name key digest path 00:28:59.801 00:13:29 -- keyring/common.sh@17 -- # name=key0 00:28:59.801 00:13:29 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:59.801 00:13:29 -- keyring/common.sh@17 -- # digest=0 00:28:59.801 00:13:29 -- keyring/common.sh@18 -- # mktemp 00:28:59.801 00:13:29 -- keyring/common.sh@18 -- # path=/tmp/tmp.J2k1iwaZBx 00:28:59.801 00:13:29 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:59.801 00:13:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:59.801 00:13:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:28:59.801 00:13:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:28:59.801 00:13:29 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:28:59.801 00:13:29 -- nvmf/common.sh@693 -- # digest=0 00:28:59.801 00:13:29 -- nvmf/common.sh@694 -- # python - 00:28:59.801 00:13:29 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.J2k1iwaZBx 00:28:59.801 00:13:29 -- keyring/common.sh@23 -- # echo /tmp/tmp.J2k1iwaZBx 00:28:59.801 00:13:29 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.J2k1iwaZBx 00:28:59.801 00:13:29 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:59.801 00:13:29 -- keyring/common.sh@15 -- # local name key digest path 00:28:59.801 00:13:29 -- keyring/common.sh@17 -- # name=key1 00:28:59.801 00:13:29 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:59.801 00:13:29 -- keyring/common.sh@17 -- # digest=0 00:28:59.801 00:13:29 -- keyring/common.sh@18 -- # mktemp 00:28:59.801 00:13:29 -- keyring/common.sh@18 -- # path=/tmp/tmp.RBqxs5H3mN 00:28:59.801 00:13:29 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:59.801 00:13:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:28:59.801 00:13:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:28:59.801 00:13:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:28:59.801 00:13:29 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:28:59.801 00:13:29 -- nvmf/common.sh@693 -- # digest=0 00:28:59.801 00:13:29 -- nvmf/common.sh@694 -- # python - 00:28:59.801 00:13:29 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RBqxs5H3mN 00:28:59.801 00:13:29 -- keyring/common.sh@23 -- # echo /tmp/tmp.RBqxs5H3mN 00:28:59.801 00:13:29 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.RBqxs5H3mN 00:28:59.801 00:13:29 -- keyring/file.sh@30 -- # tgtpid=601365 00:28:59.801 00:13:29 -- keyring/file.sh@32 -- # waitforlisten 601365 00:28:59.801 00:13:29 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:59.801 00:13:29 -- common/autotest_common.sh@817 -- # '[' -z 601365 ']' 00:28:59.801 00:13:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.801 00:13:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:59.801 00:13:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.801 00:13:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:59.801 00:13:29 -- common/autotest_common.sh@10 -- # set +x 00:28:59.801 [2024-04-27 00:13:29.976927] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 00:28:59.801 [2024-04-27 00:13:29.977000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601365 ] 00:28:59.801 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.062 [2024-04-27 00:13:30.043400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.062 [2024-04-27 00:13:30.121054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.633 00:13:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:00.633 00:13:30 -- common/autotest_common.sh@850 -- # return 0 00:29:00.633 00:13:30 -- keyring/file.sh@33 -- # rpc_cmd 00:29:00.633 00:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.633 00:13:30 -- common/autotest_common.sh@10 -- # set +x 00:29:00.633 [2024-04-27 00:13:30.741846] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.633 null0 00:29:00.633 [2024-04-27 00:13:30.773903] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:00.633 [2024-04-27 00:13:30.774173] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:00.633 [2024-04-27 00:13:30.781926] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:00.633 00:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.633 00:13:30 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:00.633 00:13:30 -- common/autotest_common.sh@638 -- # local es=0 00:29:00.633 00:13:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:00.633 00:13:30 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:00.633 00:13:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:00.633 00:13:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:00.633 00:13:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:00.633 00:13:30 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:00.633 00:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.633 00:13:30 -- common/autotest_common.sh@10 -- # set +x 00:29:00.633 [2024-04-27 00:13:30.797955] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:29:00.633 { 00:29:00.633 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.633 "secure_channel": false, 00:29:00.633 "listen_address": { 00:29:00.633 "trtype": "tcp", 00:29:00.633 "traddr": "127.0.0.1", 00:29:00.633 "trsvcid": "4420" 00:29:00.633 }, 00:29:00.633 "method": "nvmf_subsystem_add_listener", 00:29:00.633 "req_id": 1 00:29:00.633 } 00:29:00.633 Got JSON-RPC error response 00:29:00.633 response: 00:29:00.633 { 00:29:00.633 "code": -32602, 00:29:00.633 "message": "Invalid parameters" 00:29:00.633 } 00:29:00.633 00:13:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:00.633 00:13:30 -- common/autotest_common.sh@641 -- # es=1 00:29:00.633 00:13:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:00.633 00:13:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:00.633 00:13:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:00.633 00:13:30 -- keyring/file.sh@46 -- # bperfpid=601442 00:29:00.633 00:13:30 -- keyring/file.sh@48 -- # waitforlisten 601442 /var/tmp/bperf.sock 00:29:00.633 00:13:30 -- common/autotest_common.sh@817 -- # '[' -z 601442 ']' 00:29:00.633 00:13:30 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:00.633 00:13:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.633 00:13:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:00.633 00:13:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.633 00:13:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:00.633 00:13:30 -- common/autotest_common.sh@10 -- # set +x 00:29:00.893 [2024-04-27 00:13:30.854148] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
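The two key files handed to the target and to bdevperf are produced by prep_key in keyring/common.sh, traced above: a mktemp path, the interchange-format PSK written into it (the NVMeTLSkey-1 wrapping itself is done by the small python snippet in nvmf/common.sh), and a 0600 mode, which the keyring insists on later in the test. Roughly:

  key=00112233445566778899aabbccddeeff        # key0 material from the trace
  path=$(mktemp)                               # e.g. /tmp/tmp.J2k1iwaZBx
  format_interchange_psk "$key" 0 > "$path"    # digest 0; helper from nvmf/common.sh
  chmod 0600 "$path"                           # group/other access makes keyring_file_add_key fail later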
00:29:00.893 [2024-04-27 00:13:30.854194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601442 ] 00:29:00.893 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.893 [2024-04-27 00:13:30.912079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.893 [2024-04-27 00:13:30.976159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.464 00:13:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:01.464 00:13:31 -- common/autotest_common.sh@850 -- # return 0 00:29:01.464 00:13:31 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.J2k1iwaZBx 00:29:01.464 00:13:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.J2k1iwaZBx 00:29:01.724 00:13:31 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RBqxs5H3mN 00:29:01.724 00:13:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RBqxs5H3mN 00:29:01.724 00:13:31 -- keyring/file.sh@51 -- # get_key key0 00:29:01.724 00:13:31 -- keyring/file.sh@51 -- # jq -r .path 00:29:01.724 00:13:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.724 00:13:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.724 00:13:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:01.984 00:13:32 -- keyring/file.sh@51 -- # [[ /tmp/tmp.J2k1iwaZBx == \/\t\m\p\/\t\m\p\.\J\2\k\1\i\w\a\Z\B\x ]] 00:29:01.984 00:13:32 -- keyring/file.sh@52 -- # get_key key1 00:29:01.984 00:13:32 -- keyring/file.sh@52 -- # jq -r .path 00:29:01.984 00:13:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.984 00:13:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:01.984 00:13:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.244 00:13:32 -- keyring/file.sh@52 -- # [[ /tmp/tmp.RBqxs5H3mN == \/\t\m\p\/\t\m\p\.\R\B\q\x\s\5\H\3\m\N ]] 00:29:02.244 00:13:32 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:02.244 00:13:32 -- keyring/common.sh@12 -- # get_key key0 00:29:02.244 00:13:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.244 00:13:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.244 00:13:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:02.244 00:13:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.244 00:13:32 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:02.244 00:13:32 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:02.244 00:13:32 -- keyring/common.sh@12 -- # get_key key1 00:29:02.244 00:13:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.244 00:13:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.244 00:13:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.244 00:13:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:02.506 00:13:32 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:02.506 00:13:32 
-- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:02.506 00:13:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:02.506 [2024-04-27 00:13:32.700865] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:02.768 nvme0n1 00:29:02.768 00:13:32 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:02.768 00:13:32 -- keyring/common.sh@12 -- # get_key key0 00:29:02.768 00:13:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.768 00:13:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.768 00:13:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:02.768 00:13:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.768 00:13:32 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:02.768 00:13:32 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:02.768 00:13:32 -- keyring/common.sh@12 -- # get_key key1 00:29:02.768 00:13:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.768 00:13:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.768 00:13:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:02.768 00:13:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.028 00:13:33 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:03.028 00:13:33 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.028 Running I/O for 1 seconds... 
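The get_refcnt checks sprinkled through the keyring test are all the same RPC-plus-jq pipeline against the bdevperf application socket; for key0 after the attach it amounts to:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == "key0") | .refcnt'
  # prints 2: the baseline keyring reference plus the one taken by the
  # nvme0 controller attached with --psk key0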
00:29:04.414 00:29:04.414 Latency(us) 00:29:04.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.414 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:04.414 nvme0n1 : 1.01 11916.74 46.55 0.00 0.00 10701.96 6007.47 19223.89 00:29:04.414 =================================================================================================================== 00:29:04.414 Total : 11916.74 46.55 0.00 0.00 10701.96 6007.47 19223.89 00:29:04.414 0 00:29:04.414 00:13:34 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:04.414 00:13:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:04.414 00:13:34 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:04.414 00:13:34 -- keyring/common.sh@12 -- # get_key key0 00:29:04.414 00:13:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.414 00:13:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.414 00:13:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.414 00:13:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.414 00:13:34 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:04.414 00:13:34 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:04.414 00:13:34 -- keyring/common.sh@12 -- # get_key key1 00:29:04.414 00:13:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.414 00:13:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.414 00:13:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:04.414 00:13:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.675 00:13:34 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:04.675 00:13:34 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:04.675 00:13:34 -- common/autotest_common.sh@638 -- # local es=0 00:29:04.675 00:13:34 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:04.675 00:13:34 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:04.675 00:13:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:04.675 00:13:34 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:04.675 00:13:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:04.675 00:13:34 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:04.675 00:13:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:04.675 [2024-04-27 00:13:34.859636] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:04.675 [2024-04-27 00:13:34.860484] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1949120 (107): Transport endpoint is not connected 00:29:04.675 [2024-04-27 00:13:34.861477] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1949120 (9): Bad file descriptor 00:29:04.675 [2024-04-27 00:13:34.862479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:04.676 [2024-04-27 00:13:34.862489] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:04.676 [2024-04-27 00:13:34.862497] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:04.676 request: 00:29:04.676 { 00:29:04.676 "name": "nvme0", 00:29:04.676 "trtype": "tcp", 00:29:04.676 "traddr": "127.0.0.1", 00:29:04.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:04.676 "adrfam": "ipv4", 00:29:04.676 "trsvcid": "4420", 00:29:04.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.676 "psk": "key1", 00:29:04.676 "method": "bdev_nvme_attach_controller", 00:29:04.676 "req_id": 1 00:29:04.676 } 00:29:04.676 Got JSON-RPC error response 00:29:04.676 response: 00:29:04.676 { 00:29:04.676 "code": -32602, 00:29:04.676 "message": "Invalid parameters" 00:29:04.676 } 00:29:04.676 00:13:34 -- common/autotest_common.sh@641 -- # es=1 00:29:04.676 00:13:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:04.676 00:13:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:04.676 00:13:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:04.676 00:13:34 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:04.676 00:13:34 -- keyring/common.sh@12 -- # get_key key0 00:29:04.676 00:13:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.676 00:13:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.676 00:13:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.676 00:13:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.937 00:13:35 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:04.937 00:13:35 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:04.937 00:13:35 -- keyring/common.sh@12 -- # get_key key1 00:29:04.937 00:13:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.937 00:13:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.937 00:13:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:04.937 00:13:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.199 00:13:35 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:05.199 00:13:35 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:05.199 00:13:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:05.199 00:13:35 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:05.199 00:13:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:05.460 00:13:35 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:05.460 00:13:35 -- keyring/file.sh@77 -- # jq length 00:29:05.460 00:13:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.720 00:13:35 
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:05.720 00:13:35 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.J2k1iwaZBx 00:29:05.720 00:13:35 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.J2k1iwaZBx 00:29:05.720 00:13:35 -- common/autotest_common.sh@638 -- # local es=0 00:29:05.720 00:13:35 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.J2k1iwaZBx 00:29:05.720 00:13:35 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:05.720 00:13:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:05.720 00:13:35 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:05.721 00:13:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:05.721 00:13:35 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.J2k1iwaZBx 00:29:05.721 00:13:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.J2k1iwaZBx 00:29:05.721 [2024-04-27 00:13:35.848246] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.J2k1iwaZBx': 0100660 00:29:05.721 [2024-04-27 00:13:35.848269] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:05.721 request: 00:29:05.721 { 00:29:05.721 "name": "key0", 00:29:05.721 "path": "/tmp/tmp.J2k1iwaZBx", 00:29:05.721 "method": "keyring_file_add_key", 00:29:05.721 "req_id": 1 00:29:05.721 } 00:29:05.721 Got JSON-RPC error response 00:29:05.721 response: 00:29:05.721 { 00:29:05.721 "code": -1, 00:29:05.721 "message": "Operation not permitted" 00:29:05.721 } 00:29:05.721 00:13:35 -- common/autotest_common.sh@641 -- # es=1 00:29:05.721 00:13:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:05.721 00:13:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:05.721 00:13:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:05.721 00:13:35 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.J2k1iwaZBx 00:29:05.721 00:13:35 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.J2k1iwaZBx 00:29:05.721 00:13:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.J2k1iwaZBx 00:29:05.982 00:13:36 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.J2k1iwaZBx 00:29:05.982 00:13:36 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:05.982 00:13:36 -- keyring/common.sh@12 -- # get_key key0 00:29:05.982 00:13:36 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.982 00:13:36 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.982 00:13:36 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.982 00:13:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.982 00:13:36 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:05.982 00:13:36 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.982 00:13:36 -- common/autotest_common.sh@638 -- # local es=0 00:29:05.982 00:13:36 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.982 00:13:36 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:05.982 00:13:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:05.982 00:13:36 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:05.982 00:13:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:05.982 00:13:36 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.982 00:13:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:06.244 [2024-04-27 00:13:36.309438] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.J2k1iwaZBx': No such file or directory 00:29:06.244 [2024-04-27 00:13:36.309457] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:06.244 [2024-04-27 00:13:36.309480] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:06.244 [2024-04-27 00:13:36.309486] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:06.244 [2024-04-27 00:13:36.309493] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:06.244 request: 00:29:06.244 { 00:29:06.244 "name": "nvme0", 00:29:06.244 "trtype": "tcp", 00:29:06.244 "traddr": "127.0.0.1", 00:29:06.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:06.244 "adrfam": "ipv4", 00:29:06.244 "trsvcid": "4420", 00:29:06.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.244 "psk": "key0", 00:29:06.244 "method": "bdev_nvme_attach_controller", 00:29:06.244 "req_id": 1 00:29:06.244 } 00:29:06.244 Got JSON-RPC error response 00:29:06.244 response: 00:29:06.244 { 00:29:06.244 "code": -19, 00:29:06.244 "message": "No such device" 00:29:06.244 } 00:29:06.244 00:13:36 -- common/autotest_common.sh@641 -- # es=1 00:29:06.244 00:13:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:06.244 00:13:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:06.244 00:13:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:06.244 00:13:36 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:06.244 00:13:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:06.505 00:13:36 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:06.505 00:13:36 -- keyring/common.sh@15 -- # local name key digest path 00:29:06.505 00:13:36 -- keyring/common.sh@17 -- # name=key0 00:29:06.505 00:13:36 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:06.505 00:13:36 -- keyring/common.sh@17 -- # digest=0 00:29:06.505 00:13:36 -- keyring/common.sh@18 -- # mktemp 00:29:06.505 00:13:36 -- keyring/common.sh@18 -- # path=/tmp/tmp.vBWusrts99 00:29:06.505 00:13:36 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:06.505 00:13:36 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:06.505 00:13:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:06.505 00:13:36 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:06.505 00:13:36 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:06.505 00:13:36 -- nvmf/common.sh@693 -- # digest=0 00:29:06.505 00:13:36 -- nvmf/common.sh@694 -- # python - 00:29:06.505 00:13:36 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vBWusrts99 00:29:06.505 00:13:36 -- keyring/common.sh@23 -- # echo /tmp/tmp.vBWusrts99 00:29:06.505 00:13:36 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.vBWusrts99 00:29:06.505 00:13:36 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vBWusrts99 00:29:06.505 00:13:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vBWusrts99 00:29:06.767 00:13:36 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:06.767 00:13:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:06.767 nvme0n1 00:29:06.767 00:13:36 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:06.767 00:13:36 -- keyring/common.sh@12 -- # get_key key0 00:29:06.767 00:13:36 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:06.767 00:13:36 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.767 00:13:36 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:06.767 00:13:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.028 00:13:37 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:07.028 00:13:37 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:07.028 00:13:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:07.290 00:13:37 -- keyring/file.sh@101 -- # get_key key0 00:29:07.290 00:13:37 -- keyring/file.sh@101 -- # jq -r .removed 00:29:07.290 00:13:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.290 00:13:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.290 00:13:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.290 00:13:37 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:07.290 00:13:37 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:07.290 00:13:37 -- keyring/common.sh@12 -- # get_key key0 00:29:07.290 00:13:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.290 00:13:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.290 00:13:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.290 00:13:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.551 00:13:37 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:07.551 00:13:37 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:07.551 00:13:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:07.813 00:13:37 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:07.813 00:13:37 -- keyring/file.sh@104 -- # jq length 00:29:07.813 
00:13:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.813 00:13:37 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:07.813 00:13:37 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vBWusrts99 00:29:07.813 00:13:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vBWusrts99 00:29:08.075 00:13:38 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RBqxs5H3mN 00:29:08.075 00:13:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RBqxs5H3mN 00:29:08.075 00:13:38 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.075 00:13:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.337 nvme0n1 00:29:08.337 00:13:38 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:08.337 00:13:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:08.599 00:13:38 -- keyring/file.sh@112 -- # config='{ 00:29:08.599 "subsystems": [ 00:29:08.599 { 00:29:08.599 "subsystem": "keyring", 00:29:08.599 "config": [ 00:29:08.599 { 00:29:08.599 "method": "keyring_file_add_key", 00:29:08.599 "params": { 00:29:08.599 "name": "key0", 00:29:08.599 "path": "/tmp/tmp.vBWusrts99" 00:29:08.599 } 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "method": "keyring_file_add_key", 00:29:08.599 "params": { 00:29:08.599 "name": "key1", 00:29:08.599 "path": "/tmp/tmp.RBqxs5H3mN" 00:29:08.599 } 00:29:08.599 } 00:29:08.599 ] 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "subsystem": "iobuf", 00:29:08.599 "config": [ 00:29:08.599 { 00:29:08.599 "method": "iobuf_set_options", 00:29:08.599 "params": { 00:29:08.599 "small_pool_count": 8192, 00:29:08.599 "large_pool_count": 1024, 00:29:08.599 "small_bufsize": 8192, 00:29:08.599 "large_bufsize": 135168 00:29:08.599 } 00:29:08.599 } 00:29:08.599 ] 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "subsystem": "sock", 00:29:08.599 "config": [ 00:29:08.599 { 00:29:08.599 "method": "sock_impl_set_options", 00:29:08.599 "params": { 00:29:08.599 "impl_name": "posix", 00:29:08.599 "recv_buf_size": 2097152, 00:29:08.599 "send_buf_size": 2097152, 00:29:08.599 "enable_recv_pipe": true, 00:29:08.599 "enable_quickack": false, 00:29:08.599 "enable_placement_id": 0, 00:29:08.599 "enable_zerocopy_send_server": true, 00:29:08.599 "enable_zerocopy_send_client": false, 00:29:08.599 "zerocopy_threshold": 0, 00:29:08.599 "tls_version": 0, 00:29:08.599 "enable_ktls": false 00:29:08.599 } 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "method": "sock_impl_set_options", 00:29:08.599 "params": { 00:29:08.599 "impl_name": "ssl", 00:29:08.599 "recv_buf_size": 4096, 00:29:08.599 "send_buf_size": 4096, 00:29:08.599 "enable_recv_pipe": true, 00:29:08.599 "enable_quickack": false, 00:29:08.599 "enable_placement_id": 0, 00:29:08.599 "enable_zerocopy_send_server": true, 00:29:08.599 "enable_zerocopy_send_client": false, 00:29:08.599 "zerocopy_threshold": 0, 00:29:08.599 
"tls_version": 0, 00:29:08.599 "enable_ktls": false 00:29:08.599 } 00:29:08.599 } 00:29:08.599 ] 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "subsystem": "vmd", 00:29:08.599 "config": [] 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "subsystem": "accel", 00:29:08.599 "config": [ 00:29:08.599 { 00:29:08.599 "method": "accel_set_options", 00:29:08.599 "params": { 00:29:08.599 "small_cache_size": 128, 00:29:08.599 "large_cache_size": 16, 00:29:08.599 "task_count": 2048, 00:29:08.599 "sequence_count": 2048, 00:29:08.599 "buf_count": 2048 00:29:08.599 } 00:29:08.599 } 00:29:08.599 ] 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "subsystem": "bdev", 00:29:08.599 "config": [ 00:29:08.599 { 00:29:08.599 "method": "bdev_set_options", 00:29:08.599 "params": { 00:29:08.599 "bdev_io_pool_size": 65535, 00:29:08.599 "bdev_io_cache_size": 256, 00:29:08.599 "bdev_auto_examine": true, 00:29:08.599 "iobuf_small_cache_size": 128, 00:29:08.599 "iobuf_large_cache_size": 16 00:29:08.599 } 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "method": "bdev_raid_set_options", 00:29:08.599 "params": { 00:29:08.599 "process_window_size_kb": 1024 00:29:08.599 } 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "method": "bdev_iscsi_set_options", 00:29:08.599 "params": { 00:29:08.599 "timeout_sec": 30 00:29:08.599 } 00:29:08.599 }, 00:29:08.599 { 00:29:08.599 "method": "bdev_nvme_set_options", 00:29:08.599 "params": { 00:29:08.599 "action_on_timeout": "none", 00:29:08.599 "timeout_us": 0, 00:29:08.599 "timeout_admin_us": 0, 00:29:08.599 "keep_alive_timeout_ms": 10000, 00:29:08.599 "arbitration_burst": 0, 00:29:08.599 "low_priority_weight": 0, 00:29:08.599 "medium_priority_weight": 0, 00:29:08.599 "high_priority_weight": 0, 00:29:08.599 "nvme_adminq_poll_period_us": 10000, 00:29:08.599 "nvme_ioq_poll_period_us": 0, 00:29:08.599 "io_queue_requests": 512, 00:29:08.599 "delay_cmd_submit": true, 00:29:08.599 "transport_retry_count": 4, 00:29:08.599 "bdev_retry_count": 3, 00:29:08.599 "transport_ack_timeout": 0, 00:29:08.600 "ctrlr_loss_timeout_sec": 0, 00:29:08.600 "reconnect_delay_sec": 0, 00:29:08.600 "fast_io_fail_timeout_sec": 0, 00:29:08.600 "disable_auto_failback": false, 00:29:08.600 "generate_uuids": false, 00:29:08.600 "transport_tos": 0, 00:29:08.600 "nvme_error_stat": false, 00:29:08.600 "rdma_srq_size": 0, 00:29:08.600 "io_path_stat": false, 00:29:08.600 "allow_accel_sequence": false, 00:29:08.600 "rdma_max_cq_size": 0, 00:29:08.600 "rdma_cm_event_timeout_ms": 0, 00:29:08.600 "dhchap_digests": [ 00:29:08.600 "sha256", 00:29:08.600 "sha384", 00:29:08.600 "sha512" 00:29:08.600 ], 00:29:08.600 "dhchap_dhgroups": [ 00:29:08.600 "null", 00:29:08.600 "ffdhe2048", 00:29:08.600 "ffdhe3072", 00:29:08.600 "ffdhe4096", 00:29:08.600 "ffdhe6144", 00:29:08.600 "ffdhe8192" 00:29:08.600 ] 00:29:08.600 } 00:29:08.600 }, 00:29:08.600 { 00:29:08.600 "method": "bdev_nvme_attach_controller", 00:29:08.600 "params": { 00:29:08.600 "name": "nvme0", 00:29:08.600 "trtype": "TCP", 00:29:08.600 "adrfam": "IPv4", 00:29:08.600 "traddr": "127.0.0.1", 00:29:08.600 "trsvcid": "4420", 00:29:08.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.600 "prchk_reftag": false, 00:29:08.600 "prchk_guard": false, 00:29:08.600 "ctrlr_loss_timeout_sec": 0, 00:29:08.600 "reconnect_delay_sec": 0, 00:29:08.600 "fast_io_fail_timeout_sec": 0, 00:29:08.600 "psk": "key0", 00:29:08.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:08.600 "hdgst": false, 00:29:08.600 "ddgst": false 00:29:08.600 } 00:29:08.600 }, 00:29:08.600 { 00:29:08.600 "method": "bdev_nvme_set_hotplug", 
00:29:08.600 "params": { 00:29:08.600 "period_us": 100000, 00:29:08.600 "enable": false 00:29:08.600 } 00:29:08.600 }, 00:29:08.600 { 00:29:08.600 "method": "bdev_wait_for_examine" 00:29:08.600 } 00:29:08.600 ] 00:29:08.600 }, 00:29:08.600 { 00:29:08.600 "subsystem": "nbd", 00:29:08.600 "config": [] 00:29:08.600 } 00:29:08.600 ] 00:29:08.600 }' 00:29:08.600 00:13:38 -- keyring/file.sh@114 -- # killprocess 601442 00:29:08.600 00:13:38 -- common/autotest_common.sh@936 -- # '[' -z 601442 ']' 00:29:08.600 00:13:38 -- common/autotest_common.sh@940 -- # kill -0 601442 00:29:08.600 00:13:38 -- common/autotest_common.sh@941 -- # uname 00:29:08.600 00:13:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:08.600 00:13:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 601442 00:29:08.600 00:13:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:08.600 00:13:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:08.600 00:13:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 601442' 00:29:08.600 killing process with pid 601442 00:29:08.600 00:13:38 -- common/autotest_common.sh@955 -- # kill 601442 00:29:08.600 Received shutdown signal, test time was about 1.000000 seconds 00:29:08.600 00:29:08.600 Latency(us) 00:29:08.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.600 =================================================================================================================== 00:29:08.600 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.600 00:13:38 -- common/autotest_common.sh@960 -- # wait 601442 00:29:08.862 00:13:38 -- keyring/file.sh@117 -- # bperfpid=603242 00:29:08.862 00:13:38 -- keyring/file.sh@119 -- # waitforlisten 603242 /var/tmp/bperf.sock 00:29:08.862 00:13:38 -- common/autotest_common.sh@817 -- # '[' -z 603242 ']' 00:29:08.862 00:13:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.862 00:13:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:08.862 00:13:38 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:08.862 00:13:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:08.862 00:13:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:08.862 00:13:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.862 00:13:38 -- keyring/file.sh@115 -- # echo '{ 00:29:08.862 "subsystems": [ 00:29:08.862 { 00:29:08.862 "subsystem": "keyring", 00:29:08.862 "config": [ 00:29:08.862 { 00:29:08.862 "method": "keyring_file_add_key", 00:29:08.862 "params": { 00:29:08.862 "name": "key0", 00:29:08.862 "path": "/tmp/tmp.vBWusrts99" 00:29:08.862 } 00:29:08.862 }, 00:29:08.862 { 00:29:08.862 "method": "keyring_file_add_key", 00:29:08.862 "params": { 00:29:08.862 "name": "key1", 00:29:08.862 "path": "/tmp/tmp.RBqxs5H3mN" 00:29:08.862 } 00:29:08.862 } 00:29:08.862 ] 00:29:08.862 }, 00:29:08.862 { 00:29:08.862 "subsystem": "iobuf", 00:29:08.862 "config": [ 00:29:08.862 { 00:29:08.862 "method": "iobuf_set_options", 00:29:08.862 "params": { 00:29:08.862 "small_pool_count": 8192, 00:29:08.862 "large_pool_count": 1024, 00:29:08.862 "small_bufsize": 8192, 00:29:08.862 "large_bufsize": 135168 00:29:08.862 } 00:29:08.862 } 00:29:08.862 ] 00:29:08.862 }, 00:29:08.862 { 00:29:08.862 "subsystem": "sock", 00:29:08.862 "config": [ 00:29:08.862 { 00:29:08.862 "method": "sock_impl_set_options", 00:29:08.862 "params": { 00:29:08.862 "impl_name": "posix", 00:29:08.862 "recv_buf_size": 2097152, 00:29:08.862 "send_buf_size": 2097152, 00:29:08.862 "enable_recv_pipe": true, 00:29:08.862 "enable_quickack": false, 00:29:08.862 "enable_placement_id": 0, 00:29:08.862 "enable_zerocopy_send_server": true, 00:29:08.862 "enable_zerocopy_send_client": false, 00:29:08.862 "zerocopy_threshold": 0, 00:29:08.862 "tls_version": 0, 00:29:08.862 "enable_ktls": false 00:29:08.862 } 00:29:08.862 }, 00:29:08.862 { 00:29:08.862 "method": "sock_impl_set_options", 00:29:08.862 "params": { 00:29:08.862 "impl_name": "ssl", 00:29:08.862 "recv_buf_size": 4096, 00:29:08.862 "send_buf_size": 4096, 00:29:08.862 "enable_recv_pipe": true, 00:29:08.862 "enable_quickack": false, 00:29:08.862 "enable_placement_id": 0, 00:29:08.862 "enable_zerocopy_send_server": true, 00:29:08.862 "enable_zerocopy_send_client": false, 00:29:08.862 "zerocopy_threshold": 0, 00:29:08.862 "tls_version": 0, 00:29:08.862 "enable_ktls": false 00:29:08.862 } 00:29:08.862 } 00:29:08.862 ] 00:29:08.862 }, 00:29:08.862 { 00:29:08.862 "subsystem": "vmd", 00:29:08.862 "config": [] 00:29:08.862 }, 00:29:08.862 { 00:29:08.862 "subsystem": "accel", 00:29:08.862 "config": [ 00:29:08.862 { 00:29:08.862 "method": "accel_set_options", 00:29:08.862 "params": { 00:29:08.862 "small_cache_size": 128, 00:29:08.862 "large_cache_size": 16, 00:29:08.862 "task_count": 2048, 00:29:08.862 "sequence_count": 2048, 00:29:08.862 "buf_count": 2048 00:29:08.862 } 00:29:08.862 } 00:29:08.862 ] 00:29:08.862 }, 00:29:08.863 { 00:29:08.863 "subsystem": "bdev", 00:29:08.863 "config": [ 00:29:08.863 { 00:29:08.863 "method": "bdev_set_options", 00:29:08.863 "params": { 00:29:08.863 "bdev_io_pool_size": 65535, 00:29:08.863 "bdev_io_cache_size": 256, 00:29:08.863 "bdev_auto_examine": true, 00:29:08.863 "iobuf_small_cache_size": 128, 00:29:08.863 "iobuf_large_cache_size": 16 00:29:08.863 } 00:29:08.863 }, 00:29:08.863 { 00:29:08.863 "method": "bdev_raid_set_options", 00:29:08.863 "params": { 00:29:08.863 "process_window_size_kb": 1024 00:29:08.863 } 00:29:08.863 }, 00:29:08.863 { 00:29:08.863 "method": "bdev_iscsi_set_options", 00:29:08.863 "params": { 00:29:08.863 "timeout_sec": 30 00:29:08.863 } 00:29:08.863 }, 00:29:08.863 { 00:29:08.863 "method": "bdev_nvme_set_options", 
00:29:08.863 "params": { 00:29:08.863 "action_on_timeout": "none", 00:29:08.863 "timeout_us": 0, 00:29:08.863 "timeout_admin_us": 0, 00:29:08.863 "keep_alive_timeout_ms": 10000, 00:29:08.863 "arbitration_burst": 0, 00:29:08.863 "low_priority_weight": 0, 00:29:08.863 "medium_priority_weight": 0, 00:29:08.863 "high_priority_weight": 0, 00:29:08.863 "nvme_adminq_poll_period_us": 10000, 00:29:08.863 "nvme_ioq_poll_period_us": 0, 00:29:08.863 "io_queue_requests": 512, 00:29:08.863 "delay_cmd_submit": true, 00:29:08.863 "transport_retry_count": 4, 00:29:08.863 "bdev_retry_count": 3, 00:29:08.863 "transport_ack_timeout": 0, 00:29:08.863 "ctrlr_loss_timeout_sec": 0, 00:29:08.863 "reconnect_delay_sec": 0, 00:29:08.863 "fast_io_fail_timeout_sec": 0, 00:29:08.863 "disable_auto_failback": false, 00:29:08.863 "generate_uuids": false, 00:29:08.863 "transport_tos": 0, 00:29:08.863 "nvme_error_stat": false, 00:29:08.863 "rdma_srq_size": 0, 00:29:08.863 "io_path_stat": false, 00:29:08.863 "allow_accel_sequence": false, 00:29:08.863 "rdma_max_cq_size": 0, 00:29:08.863 "rdma_cm_event_timeout_ms": 0, 00:29:08.863 "dhchap_digests": [ 00:29:08.863 "sha256", 00:29:08.863 "sha384", 00:29:08.863 "sha512" 00:29:08.863 ], 00:29:08.863 "dhchap_dhgroups": [ 00:29:08.863 "null", 00:29:08.863 "ffdhe2048", 00:29:08.863 "ffdhe3072", 00:29:08.863 "ffdhe4096", 00:29:08.863 "ffdhe6144", 00:29:08.863 "ffdhe8192" 00:29:08.863 ] 00:29:08.863 } 00:29:08.863 }, 00:29:08.863 { 00:29:08.863 "method": "bdev_nvme_attach_controller", 00:29:08.863 "params": { 00:29:08.863 "name": "nvme0", 00:29:08.863 "trtype": "TCP", 00:29:08.863 "adrfam": "IPv4", 00:29:08.863 "traddr": "127.0.0.1", 00:29:08.863 "trsvcid": "4420", 00:29:08.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.863 "prchk_reftag": false, 00:29:08.863 "prchk_guard": false, 00:29:08.863 "ctrlr_loss_timeout_sec": 0, 00:29:08.863 "reconnect_delay_sec": 0, 00:29:08.863 "fast_io_fail_timeout_sec": 0, 00:29:08.863 "psk": "key0", 00:29:08.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:08.863 "hdgst": false, 00:29:08.863 "ddgst": false 00:29:08.863 } 00:29:08.863 }, 00:29:08.863 { 00:29:08.863 "method": "bdev_nvme_set_hotplug", 00:29:08.863 "params": { 00:29:08.863 "period_us": 100000, 00:29:08.863 "enable": false 00:29:08.863 } 00:29:08.863 }, 00:29:08.863 { 00:29:08.863 "method": "bdev_wait_for_examine" 00:29:08.863 } 00:29:08.863 ] 00:29:08.863 }, 00:29:08.863 { 00:29:08.863 "subsystem": "nbd", 00:29:08.863 "config": [] 00:29:08.863 } 00:29:08.863 ] 00:29:08.863 }' 00:29:08.863 [2024-04-27 00:13:38.944314] Starting SPDK v24.05-pre git sha1 f1d799ad0 / DPDK 23.11.0 initialization... 
00:29:08.863 [2024-04-27 00:13:38.944372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603242 ] 00:29:08.863 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.863 [2024-04-27 00:13:39.002436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.863 [2024-04-27 00:13:39.066488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.125 [2024-04-27 00:13:39.205153] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:09.697 00:13:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:09.697 00:13:39 -- common/autotest_common.sh@850 -- # return 0 00:29:09.697 00:13:39 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:09.697 00:13:39 -- keyring/file.sh@120 -- # jq length 00:29:09.697 00:13:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.697 00:13:39 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:09.697 00:13:39 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:09.697 00:13:39 -- keyring/common.sh@12 -- # get_key key0 00:29:09.697 00:13:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.697 00:13:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.697 00:13:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.697 00:13:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.958 00:13:40 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:09.958 00:13:40 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:09.958 00:13:40 -- keyring/common.sh@12 -- # get_key key1 00:29:09.958 00:13:40 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.958 00:13:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.958 00:13:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:09.958 00:13:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.220 00:13:40 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:10.220 00:13:40 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:10.220 00:13:40 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:10.220 00:13:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:10.220 00:13:40 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:10.220 00:13:40 -- keyring/file.sh@1 -- # cleanup 00:29:10.220 00:13:40 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.vBWusrts99 /tmp/tmp.RBqxs5H3mN 00:29:10.220 00:13:40 -- keyring/file.sh@20 -- # killprocess 603242 00:29:10.220 00:13:40 -- common/autotest_common.sh@936 -- # '[' -z 603242 ']' 00:29:10.220 00:13:40 -- common/autotest_common.sh@940 -- # kill -0 603242 00:29:10.220 00:13:40 -- common/autotest_common.sh@941 -- # uname 00:29:10.220 00:13:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:10.220 00:13:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 603242 00:29:10.220 00:13:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:10.220 00:13:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:10.220 00:13:40 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 603242' 00:29:10.220 killing process with pid 603242 00:29:10.220 00:13:40 -- common/autotest_common.sh@955 -- # kill 603242 00:29:10.220 Received shutdown signal, test time was about 1.000000 seconds 00:29:10.220 00:29:10.220 Latency(us) 00:29:10.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.220 =================================================================================================================== 00:29:10.220 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:10.220 00:13:40 -- common/autotest_common.sh@960 -- # wait 603242 00:29:10.481 00:13:40 -- keyring/file.sh@21 -- # killprocess 601365 00:29:10.481 00:13:40 -- common/autotest_common.sh@936 -- # '[' -z 601365 ']' 00:29:10.481 00:13:40 -- common/autotest_common.sh@940 -- # kill -0 601365 00:29:10.481 00:13:40 -- common/autotest_common.sh@941 -- # uname 00:29:10.481 00:13:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:10.481 00:13:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 601365 00:29:10.481 00:13:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:10.481 00:13:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:10.481 00:13:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 601365' 00:29:10.481 killing process with pid 601365 00:29:10.481 00:13:40 -- common/autotest_common.sh@955 -- # kill 601365 00:29:10.481 [2024-04-27 00:13:40.590274] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:10.481 00:13:40 -- common/autotest_common.sh@960 -- # wait 601365 00:29:10.743 00:29:10.743 real 0m11.152s 00:29:10.743 user 0m26.310s 00:29:10.743 sys 0m2.679s 00:29:10.743 00:13:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:10.743 00:13:40 -- common/autotest_common.sh@10 -- # set +x 00:29:10.743 ************************************ 00:29:10.743 END TEST keyring_file 00:29:10.743 ************************************ 00:29:10.743 00:13:40 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:10.743 00:13:40 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:10.743 00:13:40 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:10.743 00:13:40 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:10.743 00:13:40 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:10.743 00:13:40 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:10.743 00:13:40 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:10.743 00:13:40 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:10.743 00:13:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:10.743 00:13:40 -- common/autotest_common.sh@10 -- # set +x 00:29:10.743 00:13:40 -- spdk/autotest.sh@381 -- # autotest_cleanup 
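The refcount assertions in the keyring_file test above boil down to keyring_get_keys RPCs against the bperf socket, filtered with jq. A minimal sketch of what the get_refcnt helper in keyring/common.sh does, with the same socket path and key names as in this run (the expected counts match the test output: key0 is held both by the keyring and by the nvme0 connection that uses it as the PSK, while key1 is merely registered):

  get_refcnt() {
      local name=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          keyring_get_keys | jq -r ".[] | select(.name == \"$name\") | .refcnt"
  }
  (( $(get_refcnt key0) == 2 ))   # in the keyring and in use as the PSK for nvme0
  (( $(get_refcnt key1) == 1 ))   # registered but unused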
00:29:10.743 00:13:40 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:10.743 00:13:40 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:10.743 00:13:40 -- common/autotest_common.sh@10 -- # set +x 00:29:18.893 INFO: APP EXITING 00:29:18.893 INFO: killing all VMs 00:29:18.893 INFO: killing vhost app 00:29:18.893 INFO: EXIT DONE 00:29:22.201 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:29:22.201 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:29:22.201 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:29:22.201 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:29:22.201 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:29:22.201 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:29:22.201 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:29:22.201 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:29:22.201 0000:65:00.0 (144d a80a): Already using the nvme driver 00:29:22.201 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:29:22.202 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:29:22.202 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:29:22.202 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:29:22.202 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:29:22.202 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:29:22.202 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:29:22.202 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:29:26.406 Cleaning 00:29:26.406 Removing: /var/run/dpdk/spdk0/config 00:29:26.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:26.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:26.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:26.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:26.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:26.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:26.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:26.406 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:26.406 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:26.406 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:26.406 Removing: /var/run/dpdk/spdk1/config 00:29:26.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:26.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:26.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:26.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:26.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:26.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:26.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:26.406 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:26.406 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:26.406 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:26.406 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:26.406 Removing: /var/run/dpdk/spdk2/config 00:29:26.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:26.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:26.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:26.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:26.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:26.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:26.406 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:26.406 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:26.406 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:26.406 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:26.406 Removing: /var/run/dpdk/spdk3/config 00:29:26.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:26.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:26.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:26.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:26.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:26.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:26.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:26.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:26.406 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:26.406 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:26.406 Removing: /var/run/dpdk/spdk4/config 00:29:26.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:26.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:26.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:26.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:26.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:26.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:26.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:26.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:26.406 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:26.406 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:26.406 Removing: /dev/shm/bdev_svc_trace.1 00:29:26.406 Removing: /dev/shm/nvmf_trace.0 00:29:26.406 Removing: /dev/shm/spdk_tgt_trace.pid180692 00:29:26.406 Removing: /var/run/dpdk/spdk0 00:29:26.406 Removing: /var/run/dpdk/spdk1 00:29:26.406 Removing: /var/run/dpdk/spdk2 00:29:26.406 Removing: /var/run/dpdk/spdk3 00:29:26.406 Removing: /var/run/dpdk/spdk4 00:29:26.406 Removing: /var/run/dpdk/spdk_pid178944 00:29:26.406 Removing: /var/run/dpdk/spdk_pid180692 00:29:26.406 Removing: /var/run/dpdk/spdk_pid181579 00:29:26.406 Removing: /var/run/dpdk/spdk_pid182623 00:29:26.406 Removing: /var/run/dpdk/spdk_pid182967 00:29:26.406 Removing: /var/run/dpdk/spdk_pid184041 00:29:26.407 Removing: /var/run/dpdk/spdk_pid184376 00:29:26.407 Removing: /var/run/dpdk/spdk_pid184652 00:29:26.407 Removing: /var/run/dpdk/spdk_pid185640 00:29:26.407 Removing: /var/run/dpdk/spdk_pid186417 00:29:26.407 Removing: /var/run/dpdk/spdk_pid186762 00:29:26.407 Removing: /var/run/dpdk/spdk_pid187077 00:29:26.407 Removing: /var/run/dpdk/spdk_pid187453 00:29:26.407 Removing: /var/run/dpdk/spdk_pid187822 00:29:26.407 Removing: /var/run/dpdk/spdk_pid188090 00:29:26.407 Removing: /var/run/dpdk/spdk_pid188444 00:29:26.407 Removing: /var/run/dpdk/spdk_pid188831 00:29:26.407 Removing: /var/run/dpdk/spdk_pid190241 00:29:26.407 Removing: /var/run/dpdk/spdk_pid193760 00:29:26.407 Removing: /var/run/dpdk/spdk_pid194150 00:29:26.407 Removing: /var/run/dpdk/spdk_pid194513 00:29:26.407 Removing: /var/run/dpdk/spdk_pid194594 00:29:26.407 Removing: /var/run/dpdk/spdk_pid195070 00:29:26.407 Removing: /var/run/dpdk/spdk_pid195307 00:29:26.407 Removing: /var/run/dpdk/spdk_pid195687 00:29:26.407 Removing: /var/run/dpdk/spdk_pid196021 00:29:26.407 Removing: /var/run/dpdk/spdk_pid196325 00:29:26.407 Removing: /var/run/dpdk/spdk_pid196402 00:29:26.407 Removing: /var/run/dpdk/spdk_pid196769 00:29:26.407 Removing: 
/var/run/dpdk/spdk_pid196790 00:29:26.407 Removing: /var/run/dpdk/spdk_pid197449 00:29:26.407 Removing: /var/run/dpdk/spdk_pid197678 00:29:26.407 Removing: /var/run/dpdk/spdk_pid198018 00:29:26.407 Removing: /var/run/dpdk/spdk_pid198393 00:29:26.407 Removing: /var/run/dpdk/spdk_pid198558 00:29:26.407 Removing: /var/run/dpdk/spdk_pid198828 00:29:26.407 Removing: /var/run/dpdk/spdk_pid199185 00:29:26.407 Removing: /var/run/dpdk/spdk_pid199530 00:29:26.407 Removing: /var/run/dpdk/spdk_pid199754 00:29:26.407 Removing: /var/run/dpdk/spdk_pid200001 00:29:26.407 Removing: /var/run/dpdk/spdk_pid200306 00:29:26.407 Removing: /var/run/dpdk/spdk_pid200661 00:29:26.407 Removing: /var/run/dpdk/spdk_pid201024 00:29:26.407 Removing: /var/run/dpdk/spdk_pid201377 00:29:26.407 Removing: /var/run/dpdk/spdk_pid201727 00:29:26.407 Removing: /var/run/dpdk/spdk_pid201962 00:29:26.407 Removing: /var/run/dpdk/spdk_pid202210 00:29:26.407 Removing: /var/run/dpdk/spdk_pid202498 00:29:26.407 Removing: /var/run/dpdk/spdk_pid202856 00:29:26.407 Removing: /var/run/dpdk/spdk_pid203259 00:29:26.407 Removing: /var/run/dpdk/spdk_pid203680 00:29:26.407 Removing: /var/run/dpdk/spdk_pid204044 00:29:26.407 Removing: /var/run/dpdk/spdk_pid204394 00:29:26.407 Removing: /var/run/dpdk/spdk_pid204672 00:29:26.407 Removing: /var/run/dpdk/spdk_pid205141 00:29:26.407 Removing: /var/run/dpdk/spdk_pid205637 00:29:26.407 Removing: /var/run/dpdk/spdk_pid206024 00:29:26.407 Removing: /var/run/dpdk/spdk_pid206449 00:29:26.407 Removing: /var/run/dpdk/spdk_pid210995 00:29:26.407 Removing: /var/run/dpdk/spdk_pid266415 00:29:26.407 Removing: /var/run/dpdk/spdk_pid271525 00:29:26.407 Removing: /var/run/dpdk/spdk_pid282142 00:29:26.407 Removing: /var/run/dpdk/spdk_pid288596 00:29:26.407 Removing: /var/run/dpdk/spdk_pid293667 00:29:26.407 Removing: /var/run/dpdk/spdk_pid294352 00:29:26.407 Removing: /var/run/dpdk/spdk_pid308168 00:29:26.407 Removing: /var/run/dpdk/spdk_pid308174 00:29:26.407 Removing: /var/run/dpdk/spdk_pid309176 00:29:26.407 Removing: /var/run/dpdk/spdk_pid310182 00:29:26.407 Removing: /var/run/dpdk/spdk_pid311190 00:29:26.407 Removing: /var/run/dpdk/spdk_pid311866 00:29:26.407 Removing: /var/run/dpdk/spdk_pid311868 00:29:26.407 Removing: /var/run/dpdk/spdk_pid312198 00:29:26.407 Removing: /var/run/dpdk/spdk_pid312217 00:29:26.407 Removing: /var/run/dpdk/spdk_pid312331 00:29:26.407 Removing: /var/run/dpdk/spdk_pid313518 00:29:26.407 Removing: /var/run/dpdk/spdk_pid315005 00:29:26.407 Removing: /var/run/dpdk/spdk_pid316127 00:29:26.407 Removing: /var/run/dpdk/spdk_pid316797 00:29:26.407 Removing: /var/run/dpdk/spdk_pid316805 00:29:26.407 Removing: /var/run/dpdk/spdk_pid317140 00:29:26.407 Removing: /var/run/dpdk/spdk_pid318580 00:29:26.407 Removing: /var/run/dpdk/spdk_pid319939 00:29:26.407 Removing: /var/run/dpdk/spdk_pid329855 00:29:26.407 Removing: /var/run/dpdk/spdk_pid330314 00:29:26.407 Removing: /var/run/dpdk/spdk_pid335471 00:29:26.407 Removing: /var/run/dpdk/spdk_pid342358 00:29:26.407 Removing: /var/run/dpdk/spdk_pid345449 00:29:26.407 Removing: /var/run/dpdk/spdk_pid357760 00:29:26.407 Removing: /var/run/dpdk/spdk_pid369352 00:29:26.407 Removing: /var/run/dpdk/spdk_pid371437 00:29:26.407 Removing: /var/run/dpdk/spdk_pid372496 00:29:26.407 Removing: /var/run/dpdk/spdk_pid393189 00:29:26.407 Removing: /var/run/dpdk/spdk_pid397749 00:29:26.407 Removing: /var/run/dpdk/spdk_pid403204 00:29:26.407 Removing: /var/run/dpdk/spdk_pid405207 00:29:26.407 Removing: /var/run/dpdk/spdk_pid407417 00:29:26.407 Removing: 
/var/run/dpdk/spdk_pid407568 00:29:26.407 Removing: /var/run/dpdk/spdk_pid407902 00:29:26.407 Removing: /var/run/dpdk/spdk_pid408115 00:29:26.407 Removing: /var/run/dpdk/spdk_pid408684 00:29:26.407 Removing: /var/run/dpdk/spdk_pid410979 00:29:26.407 Removing: /var/run/dpdk/spdk_pid412052 00:29:26.407 Removing: /var/run/dpdk/spdk_pid412562 00:29:26.407 Removing: /var/run/dpdk/spdk_pid415242 00:29:26.407 Removing: /var/run/dpdk/spdk_pid415953 00:29:26.407 Removing: /var/run/dpdk/spdk_pid417128 00:29:26.407 Removing: /var/run/dpdk/spdk_pid422250 00:29:26.407 Removing: /var/run/dpdk/spdk_pid434618 00:29:26.407 Removing: /var/run/dpdk/spdk_pid439438 00:29:26.407 Removing: /var/run/dpdk/spdk_pid446895 00:29:26.407 Removing: /var/run/dpdk/spdk_pid448552 00:29:26.668 Removing: /var/run/dpdk/spdk_pid450232 00:29:26.668 Removing: /var/run/dpdk/spdk_pid455566 00:29:26.668 Removing: /var/run/dpdk/spdk_pid460669 00:29:26.668 Removing: /var/run/dpdk/spdk_pid469892 00:29:26.668 Removing: /var/run/dpdk/spdk_pid469900 00:29:26.668 Removing: /var/run/dpdk/spdk_pid475575 00:29:26.668 Removing: /var/run/dpdk/spdk_pid475834 00:29:26.668 Removing: /var/run/dpdk/spdk_pid476050 00:29:26.668 Removing: /var/run/dpdk/spdk_pid476589 00:29:26.668 Removing: /var/run/dpdk/spdk_pid476594 00:29:26.668 Removing: /var/run/dpdk/spdk_pid482025 00:29:26.668 Removing: /var/run/dpdk/spdk_pid482782 00:29:26.668 Removing: /var/run/dpdk/spdk_pid487991 00:29:26.668 Removing: /var/run/dpdk/spdk_pid491185 00:29:26.668 Removing: /var/run/dpdk/spdk_pid497881 00:29:26.668 Removing: /var/run/dpdk/spdk_pid504152 00:29:26.668 Removing: /var/run/dpdk/spdk_pid512962 00:29:26.668 Removing: /var/run/dpdk/spdk_pid513003 00:29:26.668 Removing: /var/run/dpdk/spdk_pid536365 00:29:26.668 Removing: /var/run/dpdk/spdk_pid537164 00:29:26.668 Removing: /var/run/dpdk/spdk_pid537686 00:29:26.668 Removing: /var/run/dpdk/spdk_pid538432 00:29:26.668 Removing: /var/run/dpdk/spdk_pid539425 00:29:26.668 Removing: /var/run/dpdk/spdk_pid540191 00:29:26.668 Removing: /var/run/dpdk/spdk_pid540962 00:29:26.668 Removing: /var/run/dpdk/spdk_pid541649 00:29:26.668 Removing: /var/run/dpdk/spdk_pid546769 00:29:26.668 Removing: /var/run/dpdk/spdk_pid547112 00:29:26.668 Removing: /var/run/dpdk/spdk_pid554448 00:29:26.668 Removing: /var/run/dpdk/spdk_pid554621 00:29:26.668 Removing: /var/run/dpdk/spdk_pid557421 00:29:26.668 Removing: /var/run/dpdk/spdk_pid564603 00:29:26.668 Removing: /var/run/dpdk/spdk_pid564608 00:29:26.668 Removing: /var/run/dpdk/spdk_pid570749 00:29:26.668 Removing: /var/run/dpdk/spdk_pid573102 00:29:26.668 Removing: /var/run/dpdk/spdk_pid575483 00:29:26.668 Removing: /var/run/dpdk/spdk_pid576921 00:29:26.668 Removing: /var/run/dpdk/spdk_pid579911 00:29:26.668 Removing: /var/run/dpdk/spdk_pid581305 00:29:26.668 Removing: /var/run/dpdk/spdk_pid591270 00:29:26.668 Removing: /var/run/dpdk/spdk_pid591848 00:29:26.668 Removing: /var/run/dpdk/spdk_pid592510 00:29:26.668 Removing: /var/run/dpdk/spdk_pid595478 00:29:26.668 Removing: /var/run/dpdk/spdk_pid596098 00:29:26.668 Removing: /var/run/dpdk/spdk_pid596553 00:29:26.668 Removing: /var/run/dpdk/spdk_pid601365 00:29:26.668 Removing: /var/run/dpdk/spdk_pid601442 00:29:26.668 Removing: /var/run/dpdk/spdk_pid603242 00:29:26.668 Clean 00:29:26.929 00:13:57 -- common/autotest_common.sh@1437 -- # return 0 00:29:26.929 00:13:57 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:26.929 00:13:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:26.929 00:13:57 -- common/autotest_common.sh@10 -- # 
set +x 00:29:26.929 00:13:57 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:26.929 00:13:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:26.929 00:13:57 -- common/autotest_common.sh@10 -- # set +x 00:29:27.190 00:13:57 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:27.190 00:13:57 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:27.190 00:13:57 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:27.190 00:13:57 -- spdk/autotest.sh@389 -- # hash lcov 00:29:27.190 00:13:57 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:27.190 00:13:57 -- spdk/autotest.sh@391 -- # hostname 00:29:27.190 00:13:57 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:27.190 geninfo: WARNING: invalid characters removed from testname! 00:29:53.861 00:14:20 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:53.861 00:14:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:55.242 00:14:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:56.623 00:14:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:58.003 00:14:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:59.913 00:14:29 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 
--rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:01.825 00:14:31 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:01.825 00:14:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.825 00:14:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:01.825 00:14:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.825 00:14:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.825 00:14:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.825 00:14:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.825 00:14:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.825 00:14:31 -- paths/export.sh@5 -- $ export PATH 00:30:01.825 00:14:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.825 00:14:31 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:01.825 00:14:31 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:01.825 00:14:31 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714169671.XXXXXX 00:30:01.825 00:14:31 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714169671.Lu4RWF 00:30:01.825 00:14:31 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:01.825 00:14:31 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:01.825 00:14:31 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:01.825 00:14:31 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:01.825 00:14:31 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:01.825 00:14:31 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:01.825 00:14:31 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:30:01.825 00:14:31 -- common/autotest_common.sh@10 -- $ set +x 00:30:01.825 00:14:32 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:01.825 00:14:32 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:30:01.825 00:14:32 -- pm/common@17 -- $ local monitor 00:30:01.825 00:14:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:01.825 00:14:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=615108 00:30:01.825 00:14:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:01.825 00:14:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=615110 00:30:01.826 00:14:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:01.826 00:14:32 -- pm/common@21 -- $ date +%s 00:30:01.826 00:14:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=615112 00:30:01.826 00:14:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:01.826 00:14:32 -- pm/common@21 -- $ date +%s 00:30:01.826 00:14:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=615115 00:30:01.826 00:14:32 -- pm/common@26 -- $ sleep 1 00:30:01.826 00:14:32 -- pm/common@21 -- $ date +%s 00:30:01.826 00:14:32 -- pm/common@21 -- $ date +%s 00:30:01.826 00:14:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714169672 00:30:01.826 00:14:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714169672 00:30:01.826 00:14:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714169672 00:30:01.826 00:14:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714169672 00:30:02.086 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714169672_collect-vmstat.pm.log 00:30:02.086 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714169672_collect-bmc-pm.bmc.pm.log 00:30:02.086 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714169672_collect-cpu-load.pm.log 00:30:02.086 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714169672_collect-cpu-temp.pm.log 00:30:03.025 00:14:33 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:03.025 00:14:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:30:03.025 00:14:33 -- spdk/autopackage.sh@11 -- $ cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:03.025 00:14:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:03.025 00:14:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:03.025 00:14:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:03.025 00:14:33 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:03.025 00:14:33 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:03.025 00:14:33 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:03.025 00:14:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:03.025 00:14:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:03.025 00:14:33 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:03.025 00:14:33 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:03.025 00:14:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:03.026 00:14:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:03.026 00:14:33 -- pm/common@45 -- $ pid=615125 00:30:03.026 00:14:33 -- pm/common@52 -- $ sudo kill -TERM 615125 00:30:03.026 00:14:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:03.026 00:14:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:03.026 00:14:33 -- pm/common@45 -- $ pid=615126 00:30:03.026 00:14:33 -- pm/common@52 -- $ sudo kill -TERM 615126 00:30:03.026 00:14:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:03.026 00:14:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:03.026 00:14:33 -- pm/common@45 -- $ pid=615127 00:30:03.026 00:14:33 -- pm/common@52 -- $ sudo kill -TERM 615127 00:30:03.026 00:14:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:03.026 00:14:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:03.026 00:14:33 -- pm/common@45 -- $ pid=615128 00:30:03.026 00:14:33 -- pm/common@52 -- $ sudo kill -TERM 615128 00:30:03.026 + [[ -n 59065 ]] 00:30:03.026 + sudo kill 59065 00:30:03.034 [Pipeline] } 00:30:03.053 [Pipeline] // stage 00:30:03.058 [Pipeline] } 00:30:03.073 [Pipeline] // timeout 00:30:03.078 [Pipeline] } 00:30:03.092 [Pipeline] // catchError 00:30:03.096 [Pipeline] } 00:30:03.109 [Pipeline] // wrap 00:30:03.115 [Pipeline] } 00:30:03.132 [Pipeline] // catchError 00:30:03.139 [Pipeline] stage 00:30:03.141 [Pipeline] { (Epilogue) 00:30:03.154 [Pipeline] catchError 00:30:03.156 [Pipeline] { 00:30:03.170 [Pipeline] echo 00:30:03.172 Cleanup processes 00:30:03.176 [Pipeline] sh 00:30:03.461 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:03.462 615226 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:03.462 615680 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:03.475 [Pipeline] sh 00:30:03.759 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:03.759 ++ grep -v 'sudo pgrep' 00:30:03.759 ++ awk '{print $1}' 00:30:03.759 + sudo kill -9 615226 00:30:03.772 [Pipeline] sh 00:30:04.061 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:14.192 [Pipeline] 
sh 00:30:14.481 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:14.481 Artifacts sizes are good 00:30:14.497 [Pipeline] archiveArtifacts 00:30:14.504 Archiving artifacts 00:30:14.682 [Pipeline] sh 00:30:14.967 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:14.984 [Pipeline] cleanWs 00:30:14.996 [WS-CLEANUP] Deleting project workspace... 00:30:14.996 [WS-CLEANUP] Deferred wipeout is used... 00:30:15.003 [WS-CLEANUP] done 00:30:15.006 [Pipeline] } 00:30:15.030 [Pipeline] // catchError 00:30:15.043 [Pipeline] sh 00:30:15.330 + logger -p user.info -t JENKINS-CI 00:30:15.341 [Pipeline] } 00:30:15.358 [Pipeline] // stage 00:30:15.364 [Pipeline] } 00:30:15.384 [Pipeline] // node 00:30:15.391 [Pipeline] End of Pipeline 00:30:15.427 Finished: SUCCESS